News

Avere adds global namespace to NAS accelerator appliances

Sonia Lelii

Avere Systems Inc. added a global namespace capability to its FXT Series of tiered NAS accelerator appliances, making it easier to manage data across heterogeneous filers.

Avere's FXT clustered tiered storage devices work as a high-performance cache that sits between clients and heterogeneous NAS filers. Global namespace is part of the 2.0 release of the Avere Operating System. Avere CEO Ron Bianchini said the lack of a global namespace complicated file management between users and filers because administrators had to change mount points whenever files moved across filers.

The global namespace gives a single view across all the filers, unifying NAS systems from one or more vendors. It provides a logical abstraction layer between the file system and the storage.
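To make that abstraction concrete, the following minimal Python sketch shows how a global namespace layer can map one logical directory tree onto several back-end filer exports. It illustrates the concept only, not Avere's implementation, and the filer names and paths are hypothetical.

```python
# Illustrative sketch of a global namespace: clients see one logical tree while
# a mapping layer decides which physical filer export backs each branch.
# Not Avere's code; all names here are hypothetical.

# Logical path prefix -> physical filer export that backs it
NAMESPACE_MAP = {
    "/projects/renders": "filer-a:/vol/renders",
    "/projects/assets":  "filer-b:/vol/assets",   # a different vendor's filer
    "/home":             "filer-c:/vol/home",
}

def resolve(logical_path: str) -> str:
    """Translate a client-visible path into its backing filer location."""
    # Match the longest prefix so nested branches resolve correctly.
    for prefix in sorted(NAMESPACE_MAP, key=len, reverse=True):
        if logical_path.startswith(prefix):
            return NAMESPACE_MAP[prefix] + logical_path[len(prefix):]
    raise FileNotFoundError(logical_path)

# Clients mount only the logical root; moving /projects/assets to a new filer
# means updating one mapping entry, not every client's mount points.
print(resolve("/projects/assets/shot042/frame_0001.exr"))
# -> filer-b:/vol/assets/shot042/frame_0001.exr
```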

"Now all the client sees is the logical view, no matter what change is done on the physical side," Bianchini said. "All the user sees is the directory, and he can read and write to it. He mounts to NFS or CIFS and he sees that logical layer where all the filers and each file appear as a directory to the client."

He continued: "We support 25 filers [per cluster], and until this release we did it in a way that when a new filer was added, a new IP range was presented to users. Prior to [global namespace], you had numerous IP ranges. Now you have only one."

Marc Staimer, president of Dragon Slayer Consulting, said Avere Systems solves a major problem in the end-to-end management of unstructured data. "From the end-user perspective, you can always see where the data is and that's a better management scheme," he said. "Avere provides much simpler management and eliminates the bottlenecks of other global namespace-type products."

Staimer said the new capability will also allow customers to maintain data online when they do a technology refresh.

"If the Avere system went away, you still can get hold of the data," he said. "You eliminate the concept of data migration or the concept of technology refresh. If you want to replace a NetApp filer with a new one, you can do it online. No server remediation, no mounting. It's end-to-end management made simple."

Avere's FXT Cluster Series first launched in October 2009. The series includes three models: the FXT 2300, FXT 2500 and FXT 2700. Each FXT Series node contains 64 GB of read-only DRAM and 1 GB of battery-backed NVRAM for writes. The FXT 2300 contains 1.2 TB of 15,000 rpm SAS drives and the FXT 2500 contains 3.5 TB of SAS disk. The FXT 2700, introduced in the first quarter of 2010, also has 64 GB of read-only DRAM and 1 GB of NVRAM for writes, but it contains eight 64 GB single-level cell (SLC) solid-state drives (SSDs) for a total of 512 GB of flash memory.

Each FXT cluster can support 25 filers. Avere's scale-out clustering capability automatically balances loads across the cluster, and the system supports N+1 protection for high availability along with Gigabit Ethernet (GbE), 10 Gigabit Ethernet and TCP/IP.

"As you add nodes to our cluster, you get linear performance scaling," Avere's Bianchini said.

Avere previously provided statistics and analysis on applications, clients and filers through its management console. With global namespace, it now delivers those statistics for an entire cluster.

'WAN optimization' as a bonus

Sony Pictures Imageworks has been an Avere customer for more than a year, using the FXT Cluster Series for applications that render images for live-action and animated films. Nick Bali, a senior systems architect at Sony Pictures Imageworks, said his company has 24 FXT 2300 nodes and 12 FXT 2700 nodes for a total of more than 8 TB of cache capacity.

Bali said his company runs the FXT cluster nodes for better NFS performance, but also employs the high-performance cache to minimize WAN latency and bandwidth use.

Typically, Sony Pictures Imageworks incurred a 20- to 30-millisecond penalty for each round-trip read request from a remote site to the main data center, but it has been able to reduce that to a one-time 1- to 5-millisecond penalty per read request.

"For us, that's WAN optimization," Bali said. "We put cache devices in the remote sites so that the compute nodes don't have to go to the main data center for each read. Data is read once from the data center and then placed locally on the remote site."

This article was originally published on SearchStorage.com.