N_Port ID Virtualization, a new HBA feature, lets you assign a different WWN to each virtual server and control which applications run on specific hardware, and even when they run.

Fibre Channel (FC) host bus adapters (HBAs) are becoming pivotal components in the emerging virtual data center. With nearly every facet of data centers -- servers, storage, storage area networks (SANs) and I/O -- becoming virtual, HBAs are taking on more responsibility for ensuring that each virtual server accesses only its assigned virtual storage. As FC HBAs assume more critical but largely invisible roles, storage administrators must ensure that their SAN infrastructures are sufficiently modernized to support the new capabilities of FC HBAs.

Changes need to occur at all layers of the storage network to make the virtual data center a reality. FC HBAs were initially designed to let one physical server connect to one or more logical unit numbers (LUNs) on multiple storage arrays. A new HBA software feature called N_Port ID Virtualization (NPIV) lets admins assign a different worldwide name (WWN) to each virtual server, which means a virtual server can access only the volumes specifically assigned to it.
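
To make the concept concrete, here's a minimal Python sketch of the mapping NPIV enables. The server names, WWN values and LUN labels are invented for illustration; this isn't vendor code, only a model of how a per-virtual-server WWN lets LUN masking follow the virtual server rather than the physical HBA.

```python
# Illustrative model: one virtual WWN per virtual server, with LUN access
# checked against that WWN. All names and WWN values are made up.

# Each virtual server is assigned its own virtual WWN.
virtual_wwn = {
    "vm_exchange": "c0:50:76:00:00:00:00:01",
    "vm_oracle":   "c0:50:76:00:00:00:00:02",
}

# The array's LUN masking references the virtual WWN, not the physical HBA.
lun_masking = {
    "c0:50:76:00:00:00:00:01": {"LUN 0", "LUN 1"},
    "c0:50:76:00:00:00:00:02": {"LUN 2"},
}

def can_access(vm, lun):
    """A virtual server sees only the LUNs masked to its virtual WWN."""
    wwn = virtual_wwn[vm]
    return lun in lun_masking.get(wwn, set())

print(can_access("vm_exchange", "LUN 1"))  # True
print(can_access("vm_oracle", "LUN 1"))    # False
```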

NPIV creates new dependencies at the server and network layers in the storage fabric (see NPIV checklist on Storage's Web site). For instance, OSes like AIX, HP-UX, Linux, Solaris, VMware and Microsoft Corp.'s Virtual Server must recognize and support the new APIs associated with NPIV before they can move a virtual server image from one set of hardware to another. On FC switches and directors, admins may need to upgrade their network OS to recognize and manage NPIV logins, pay additional licensing fees to use the feature and create new zones so the virtual WWNs can access their assigned storage. In this new virtual storage environment, FC HBA features such as beaconing and link-speed indicators take on increased importance, helping admins identify the actual physical location of an FC HBA in a fabric or determine at a glance the speed at which an HBA port is connecting to the FC SAN.

The impact of virtual servers
NPIV was initially designed to support multiple Linux partitions created on an IBM System z9 and to let admins assign each Linux partition its own virtual WWN. As NPIV evolved, it was standardized by the T11 committee and now offers the following benefits:

  • Assignment of a specific virtual WWN to each virtual server
  • Easier creation of new virtual servers
  • The ability to move virtual servers to the most appropriate hardware
  • Support for multiple virtual WWNs on one physical HBA
  • A separate fabric login on the FC SAN for each virtual WWN
  • LUN assignment, quality of service and zoning by virtual WWN
The T11 NPIV standard establishes 256 as the maximum number of virtual WWNs that can be assigned to a port, but it doesn't require every FC HBA to support that many WWNs. This gives FC HBA vendors some freedom to create FC HBAs that support differing numbers of virtual WWNs. For instance, Emulex Corp.'s LP11000 will support approximately 100 virtual WWNs while its LP1150 supports only eight. Although the LP1150 is comparable in hardware architecture to the LP11000, the LP1150 has been tested with and supports a smaller number of OSes and virtual servers; this translates into the LP1150 costing approximately $200 less than the LP11000.
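
The per-port limit behaves like a simple capacity check. The sketch below models that behavior with assumed limits (8 and roughly 100 virtual WWNs, with 256 as the ceiling the standard allows); the WWN values are fabricated and the class is purely illustrative, not an HBA driver interface.

```python
# Illustrative model of a physical HBA port that accepts additional virtual
# WWNs only up to its vendor-specific limit; the T11 standard caps the count
# at 256, but an individual HBA model may support far fewer.

class HbaPort:
    def __init__(self, max_virtual_wwns):
        self.max_virtual_wwns = min(max_virtual_wwns, 256)  # standard's ceiling
        self.virtual_wwns = []

    def add_virtual_wwn(self, wwn):
        """Register another virtual WWN on this port if the limit allows."""
        if len(self.virtual_wwns) >= self.max_virtual_wwns:
            return False  # port is at its NPIV capacity
        self.virtual_wwns.append(wwn)
        return True

port = HbaPort(max_virtual_wwns=8)  # assumed limit for a lower-end HBA
for i in range(10):
    accepted = port.add_virtual_wwn(f"c0:50:76:00:00:00:01:{i:02x}")
    if not accepted:
        print(f"virtual WWN {i} rejected: port limit of {port.max_virtual_wwns} reached")
```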

NPIV lets you permanently associate virtual WWNs with virtual servers. A problem with burning the WWN into the HBA (as was done with previous generations of FC HBAs) is that when an administrator upgrades a server or replaces a server's HBA, zoning and LUN masking changes need to occur in the fabric to let that server's new HBA access the storage assigned to it. NPIV minimizes or eliminates the need to make such changes to the SAN environment when a physical FC HBA is replaced.
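
The difference can be shown with a small sketch. The WWN values below are made up; the point is simply that a zone referencing a virtual WWN survives an HBA swap, while a zone referencing the burned-in WWN has to be edited.

```python
# Illustrative sketch (made-up WWN values): a zone that references the
# virtual WWN keeps working when the physical HBA is swapped, because the
# virtual WWN travels with the server; a zone that references the burned-in
# WWN must be edited after the swap.

zone_members_virtual = {"c0:50:76:00:00:00:00:01", "array_port_wwn"}
zone_members_burned_in = {"10:00:00:00:c9:aa:bb:01", "array_port_wwn"}

def server_still_zoned(zone_members, server_wwn):
    """The server keeps its storage access only if its WWN is still a zone member."""
    return server_wwn in zone_members

# Replace the HBA: the burned-in WWN changes, the virtual WWN does not.
virtual_wwn = "c0:50:76:00:00:00:00:01"        # unchanged after the swap
new_burned_in_wwn = "10:00:00:00:c9:aa:bb:02"  # new hardware, new WWN

print(server_still_zoned(zone_members_virtual, virtual_wwn))          # True: no rezoning
print(server_still_zoned(zone_members_burned_in, new_burned_in_wwn))  # False: zone edit needed
```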

Permanently assigning virtual WWNs to virtual servers lets admins clone existing server images and quickly have new virtual servers operational with their unique but virtual WWNs. However, admins must still wait for virtual OSes like VMware to fully integrate with NPIV so that when a clone of an existing server image is made, VMware automatically creates new, unique virtual WWNs for the server clone.
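
Until that integration arrives, the requirement is simply that every clone gets a WWN no other server uses. The sketch below is not how VMware or any vendor derives WWNs; it uses a made-up prefix and a counter only to illustrate that the server image is copied but the WWN never is.

```python
# Illustrative sketch: each cloned server image must receive its own virtual
# WWN. A counter appended to a fixed, made-up prefix is enough to show the
# idea; real products derive WWNs differently.

import itertools

_wwn_counter = itertools.count(1)

def new_virtual_wwn(prefix="c0:50:76:00:00:00"):
    """Return the next unused virtual WWN under a made-up prefix."""
    n = next(_wwn_counter)
    return f"{prefix}:{(n >> 8) & 0xff:02x}:{n & 0xff:02x}"

template_image = {"os": "linux", "apps": ["web"]}
clones = []
for _ in range(3):
    clone = dict(template_image)      # copy the server image
    clone["wwn"] = new_virtual_wwn()  # but never copy the WWN
    clones.append(clone)

print([c["wwn"] for c in clones])  # three distinct virtual WWNs
```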

In addition, permanently assigning virtual WWNs to virtual servers allows administrators to optimize which applications run on specific hardware and even when they run. For instance, if an application needs more or less processing power, the administrator can move the application to the most appropriate hardware with minimal or no SAN reconfiguration. This technology could also lead to organizations transparently moving applications throughout the day: during an application's peak demand period it can be moved to the hardware that provides the best performance, while applications with less stringent requirements are moved to lower-performing hardware.

NPIV can also segregate and prioritize I/O traffic by WWN. Administrators can set policies that recognize the virtual WWN associated with each virtual server so the fabric can prioritize and route a virtual server's traffic according to each application's requirements.
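
Conceptually, such a policy is a lookup keyed by the virtual WWN. The sketch below is only an illustration of that lookup; the priority classes and WWN values are invented, and real fabrics implement QoS in their own ways.

```python
# Illustrative sketch: a fabric-side policy table keyed by virtual WWN lets
# each virtual server's traffic be prioritized according to its application.
# Priority classes and WWN values are invented for the example.

qos_policy = {
    "c0:50:76:00:00:00:00:01": "high",    # latency-sensitive database VM
    "c0:50:76:00:00:00:00:02": "medium",  # mail server VM
}

def classify_traffic(source_wwn):
    """Return the priority class for traffic from a given virtual WWN."""
    return qos_policy.get(source_wwn, "low")  # unknown WWNs get best effort

print(classify_traffic("c0:50:76:00:00:00:00:01"))  # high
print(classify_traffic("c0:50:76:00:00:00:00:99"))  # low
```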

Hardware improvements
In addition to supporting multiple virtual servers with different application types and performance characteristics, new FC HBAs offer improved underlying hardware features, such as:

  • 4 Gbps FC link speeds
  • PCI Express bus architectures
  • Faster ASICs
  • More memory per port
Although 4 Gbps links at the FC switch level are still not commonplace, users may see a performance boost for some types of applications when using 4 Gbps FC HBAs. For instance, performance tests run by QLogic Corp. indicate that its 4 Gbps FC HBAs outperform its 2 Gbps FC HBAs by as much as a factor of five for small-block traffic (block sizes under 8 KB) in 2 Gbps FC SANs under some types of application loads. These performance gains are most evident in Microsoft Exchange environments where small block sizes of 0.5 KB and 1 KB are prevalent. However, applications using block sizes of 8 KB or larger will see little or no performance gain when using 4 Gbps FC HBAs with 2 Gbps FC SAN switches.
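
The intuition can be shown with back-of-the-envelope arithmetic. In the sketch below, the IOPS capacities assigned to the older and newer HBAs are assumed values chosen purely for illustration, not QLogic measurements; the point is only that at small block sizes the HBA's I/O processing rate, rather than the 2 Gbps wire, is the bottleneck, so a faster HBA helps, while at 8 KB and above the link itself caps throughput.

```python
# Back-of-the-envelope model: achievable throughput is the lower of the link
# limit and the HBA's IOPS limit. IOPS figures are assumptions, not vendor data.

LINK_2G_MBPS = 200.0  # roughly 200 MBps usable on a 2 Gbps FC link

def throughput_mbps(block_kb, hba_max_iops):
    """Achievable MBps: the lower of the link limit and the IOPS limit."""
    iops_limited = hba_max_iops * block_kb / 1024.0  # MBps if IOPS-bound
    return min(LINK_2G_MBPS, iops_limited)

for block_kb in (0.5, 1, 8, 64):
    older = throughput_mbps(block_kb, hba_max_iops=40_000)   # assumed older HBA
    newer = throughput_mbps(block_kb, hba_max_iops=200_000)  # assumed newer HBA
    print(f"{block_kb:>4} KB blocks: {older:6.1f} vs {newer:6.1f} MBps "
          f"({newer / older:.1f}x)")
```

With these assumed numbers, the 0.5 KB and 1 KB cases show roughly a fivefold gain, while the 8 KB and 64 KB cases show none because both HBAs are pinned at the 2 Gbps link limit.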

A significant design improvement in a number of 4 Gbps FC HBAs is the use of the PCI Express I/O architecture. Earlier PCI and PCI-X architectures required FC HBAs to arbitrate for control of a shared bus architecture on servers. The PCI Express architecture eliminates the requirement for arbitration and introduces a switch-like architecture that allows the HBA to act as a high-speed interconnect between the server and the FC SAN. Atto Technology Inc.'s Celerity FC-44ES, Emulex's LPe11000, Hewlett-Packard (HP) Co.'s FC2142SR, LSI Logic Corp.'s LSI7104EP and QLogic's SANblade QLE2460 all have the PCI Express architecture, but these HBAs can only be used in newer servers that support PCI Express.

The performance of onboard HBA ASICs has been improved to keep up with the new switching technologies. Emulex reflects what most FC HBA vendors are doing: Its dual-port LPe11002 Zephyr controller has two ARM11 processors that use less memory but deliver higher performance than earlier generations of processors; a chip is dedicated to each channel. Each chip also has separate flash memory, which means users can run separate flash memory loads and even different versions of the flash if a specific application configuration calls for it.

The total integration of NPIV with servers and network operating systems is still at least two years out, but there are current alternatives to NPIV for organizations that want to more fully realize the benefits of a virtualized environment.

Cisco Systems Inc.'s VFrame technology allows users to virtualize server and storage resources, and map diskless servers to shared pools of storage according to application requirements. This technology eliminates the need for Ethernet NICs and FC host bus adapters on servers but requires the deployment of an InfiniBand infrastructure that handles both TCP/IP and FC traffic. Administrators must also install a software agent on each server, as well as the Cisco SFS 3012 Multifabric Server Switch that pools and manages storage resources.

Scalent Systems Inc.'s Virtual Operating Environment (V/OE) lets users virtualize their existing infrastructure and even their existing virtual operating systems such as VMware without the need to introduce new hardware or wait until NPIV is fully integrated. Scalent's V/OE requires the configuration of one, and ideally two, management consoles and an agent on each managed server.

V/OE then handles the virtualization and movement of each server's IP addresses and worldwide names as servers are moved between hardware, without any dependency on NPIV.

Issues to consider
There are presently three major roadblocks to using an NPIV-capable FC HBA. First, NPIV is only available on the latest FC HBAs, and vendors don't provide a way to upgrade previous versions of their HBAs. Second, NPIV requires the FC switch to recognize and manage multiple, unique WWNs logging into each FC port. Currently, operating systems on FC directors allow only one unique WWN to log into each port, which means that even if the HBA supports NPIV, attempts by the HBA to log into the fabric multiple times will fail.

To allow multiple logins, users must update their FC director's operating system to the latest version, which permits multiple WWN logins on a port and routes the traffic accordingly. For instance, Brocade Communications Systems Inc.'s SilkWorm products require Fabric OS Version 5.1.0, while Cisco's MDS directors require the firm's SAN-OS Software Release 3.0(2). In addition, Cisco doesn't provide NPIV as a standard feature of its SAN-OS; users must purchase and license the NPIV feature separately.
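
The switch-side behavior the article describes can be modeled in a few lines. The sketch below is not switch-vendor code and does not reproduce any vendor's CLI; it only illustrates that a port's first login always succeeds, while additional logins from the same physical port succeed only when the switch OS supports NPIV and, where a vendor requires it, the feature is licensed.

```python
# Illustrative model of an FC switch port handling logins with and without
# NPIV support. Behavior only; no vendor CLI or API is represented here.

class SwitchPort:
    def __init__(self, npiv_supported, npiv_licensed=True):
        self.npiv_supported = npiv_supported
        self.npiv_licensed = npiv_licensed
        self.logged_in_wwns = []

    def login(self, wwn):
        if not self.logged_in_wwns:
            self.logged_in_wwns.append(wwn)  # first login always works
            return True
        if self.npiv_supported and self.npiv_licensed:
            self.logged_in_wwns.append(wwn)  # additional virtual WWNs accepted
            return True
        return False                         # pre-NPIV behavior: reject extras

old_director = SwitchPort(npiv_supported=False)
upgraded_director = SwitchPort(npiv_supported=True)

for wwn in ("physical_wwn", "virtual_wwn_1", "virtual_wwn_2"):
    print(wwn, old_director.login(wwn), upgraded_director.login(wwn))
```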

The third and most significant roadblock is that no server OS natively recognizes and interacts with the NPIV API. FC HBA vendors estimate that full integration between the various server OSes and NPIV will occur in approximately two years. Until that happens, admins need to manually assign the virtual WWN in the OS to the FC HBA when moving applications to new server hardware or when cloning virtual servers. Although creating unique virtual WWNs is a matter of procedure when cloning virtual servers, moving applications to different server hardware will require application outages (see Alternatives to NPIV).

HBA, show yourself
Identifying which HBA is used by what application is key to avoiding application outages and troubleshooting app problems. However, with servers supporting multiple HBAs and applications rolling virtually from one server to another, identifying which HBA is in use, when it's in use and how it's used becomes problematic. HBA vendors are beginning to introduce features like beaconing, link-speed indicators and ExpressModule form factors to assist administrators in identifying, troubleshooting and replacing HBAs.

Beaconing allows administrators to send a signal from the FC HBA's management software to a specific HBA that causes an indicator light on the HBA to blink repeatedly to identify its location in the server. But before administrators can implement beaconing, they need to determine how the management software communicates with the FC HBA. FC HBAs from Emulex and QLogic can communicate in-band (over the FC SAN) or out-of-band (over the Ethernet network), and each approach has its drawbacks.

For beaconing to work in-band, the FC HBA's management server must have an FC HBA port attached to each FC SAN in which it will manage client FC HBAs. To communicate with the FC HBA over an FC SAN, administrators need to create a zone that includes the HBA WWNs of both the management and client servers. However, putting the management server's HBAs in the same zone as the client server may slow boot times for some server operating systems, such as Sun Solaris, as the client OS communicates with and identifies the role of the management server's HBA WWN in the zone.

Out-of-band management is equally problematic. Administrators need to install agents on each client server so that the management server can communicate with them. If the client servers are on different subnets or on restricted internal networks, admins need to create the appropriate routes and put the right firewall policies in place to allow the management server to communicate with the client servers.

Another problem admins can encounter with FC HBAs is establishing the exact speed at which each port on a multiport FC HBA connects to the FC SAN. To address this problem, FC HBA vendors are providing separate link-speed indicator lights for each possible speed at which each port of their FC HBAs can connect to the SAN. For instance, QLogic's quad-port SANblade QLE2464 provides 12 different link-speed indicator lights (three for each port on the HBA) to let admins determine the connection speed of each port on the HBA. (See Quad-port Fibre Channel host bus adapters on Storage's Web site.)

HBA vendors also need to minimize or eliminate the need to take a server offline when a faulty HBA needs to be replaced. QLogic's SANblade QEM2462 takes advantage of the PCIe ExpressModule specification, which lets admins remove and insert HBAs in servers without any tools. However, the SAN zoning and LUN masking on arrays must still be updated with the new WWN of the replacement HBA unless NPIV is used.

As data centers become increasingly virtual, FC HBA vendors must make the management and troubleshooting of their HBAs virtual as well. While beaconing and link-speed indicators are small steps in that direction, NPIV is a more important step to keep FC HBAs relevant in the emerging virtual data center. Although NPIV provides some short-term benefits, such as allowing admins to create and use virtual WWNs instead of static ones, the big benefits of NPIV won't come until it's fully integrated with server and network OSes in 2008.

About the author: Jerome M. Wendt is a storage analyst specializing in the field of open-systems storage and SANs. He has managed storage for small and midsized organizations in this capacity.

This was first published in November 2006