News

UCSD lab studies future changes to non-volatile memory technologies

Carol Sliwa, Senior Writer

Due to its blistering speed compared with conventional hard disks and NAND flash-based solid-state drives (SSDs), phase-change memory (PCM) is among the emerging non-volatile memory technologies with the potential to make a profound impact on enterprise data storage systems. But the dramatic performance boost will be of little use without sweeping changes to the design of storage systems, according to someone with early hands-on experience testing PCM technology.

Steven Swanson, director of the Non-Volatile Systems Laboratory at the University of California, San Diego (UCSD), built a prototype PCM-based storage system to help study how next-generation non-volatile memory technologies will shape the future of computing systems. He said his tests showed that operating systems, file systems, databases and other software components need significant enhancements to enable PCM to live up to its potential.

“We’ve found that any piece of software that spends a lot of time trying to optimize disk performance is going to need significant reengineering in order to take full advantage of these new memory technologies,” Swanson said.

Jim Handy, founder and chief analyst at Objective Analysis in Los Gatos, Calif., noted that Fusion-io and Schooner Information Technology Inc. have been writing code optimized for NAND flash technology. But UCSD's Swanson envisions changes that are more far-reaching and critical for faster emerging memory technologies such as PCM, magnetoresistive RAM (MRAM) and resistive RAM (RRAM).

Backed by government grant money, the UCSD lab launched its project two years ago. The team built a prototype high-performance storage array called Moneta, named for the goddess of memory in Roman mythology.

Moneta initially emulated the fast non-volatile memory technologies using 64 GB of DRAM plugged into the dual in-line memory module (DIMM) sockets of a BEE3 field-programmable gate array (FPGA) prototyping system designed by Microsoft Research and sold through BEEcube Inc.

But the researchers got their hands on PCM chips from Micron Technology Inc. and made the nascent technology usable in the Moneta system. They designed PCM memory modules that look like DIMMs and FPGA-based controllers to manage them, using a hardware description language to write the “gateware” to run the modules and the controllers.

The team plugged 16 PCM cards into the BEE3 board’s DIMM sockets and used a PCI Express (PCIe) link to connect Moneta to the host Linux-based servers. They chose PCIe over traditional storage interfaces such as SAS and SATA because it’s widely available, more flexible and faster.

“The future of high-end solid-state drives is PCIe. All the highest performance SSDs are PCIe drives,” UCSD's Swanson said. “It’s much higher bandwidth. With PCIe, you can in theory go up to 16 or 32 gigabytes per second. With SATA 3.0, it’s 600 megabytes per second.”

PCM: Expectation vs. reality

But the UCSD lab’s early tests of the PCIe-attached PCM SSDs produced results that weren’t even close to the performance projections that the industry has touted. The PCM SSDs were only seven times faster than single-level cell (SLC) flash-based SSDs for small data requests of less than 8 KB, and up to 30% faster for larger data requests. In some cases, the PCM SSDs were even 50% slower than SLC flash with data requests of more than 8 KB, according to Swanson.

He said these results were a far cry from PCM projections of 50,000 times faster than disk and 1,000 times faster than flash.

“It’s not that the parts aren't performing the way they’re supposed to perform. It’s just that the technology hasn’t quite progressed yet to where the industry expects it to end up as the technology matures,” Swanson said.

Swanson attributed the less-than-stellar results to the early-generation PCM chip's slow, NOR-like interface and to the metal alloy layer's slow transition between amorphous and crystalline states.

PCM works by passing a current through a metal alloy, known as a chalcogenide, to melt it, and then cooling the material into either a crystalline or an amorphous (glassy) state to distinguish between the binary values of one and zero. In its current incarnation, the piece of metal is relatively large and requires more time to heat up and cool down than it will in the future, Swanson said.

“When the next generation comes out, it’s going to look much more like conventional memory,” he said, “and the interface will be much, much faster.”
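
In rough code terms, the mapping Swanson describes looks something like the sketch below. It assumes the common convention that the low-resistance crystalline phase stores a one and the high-resistance amorphous (glassy) phase a zero; the resistance threshold is an arbitrary illustrative number, not a figure from Micron's parts or from the UCSD tests.

    # Conceptual model of a single PCM cell, for illustration only.
    # Assumed convention: crystalline = low resistance = 1,
    # amorphous (glassy) = high resistance = 0.

    AMORPHOUS_THRESHOLD_OHMS = 100_000  # arbitrary illustrative dividing line


    def read_bit(cell_resistance_ohms: float) -> int:
        """Map a measured cell resistance to the stored bit value."""
        return 1 if cell_resistance_ohms < AMORPHOUS_THRESHOLD_OHMS else 0


    def program_pulse(bit: int) -> str:
        """Describe the pulse each value needs: a short, intense pulse melts
        the chalcogenide and quenches it into the amorphous state; a longer,
        gentler pulse lets it crystallize."""
        return ("long anneal pulse -> crystalline (1)" if bit
                else "short melt-and-quench pulse -> amorphous (0)")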

Suggested areas of improvement

But the PCM chip and the interface aren’t the only things in need of improvement. The UCSD research team found that the file system and operating system spent a considerable amount of time trying to carefully plan how to access the hard disk to achieve maximum efficiency, Swanson said. He said his team eliminated the complex scheduling algorithms in the operating system to improve performance.
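
A rough analogue on a stock Linux system is switching a fast device's elevator to the no-op scheduler, which skips the seek-minimizing reordering that only pays off on rotating disks. The sketch below assumes a hypothetical block device named sdb, root privileges and a kernel that offers the noop scheduler; the Moneta team's own changes went deeper, into the driver and I/O stack.

    # Sketch: skip seek-oriented request reordering for a fast solid-state
    # device by selecting the "noop" I/O scheduler through sysfs.
    # "sdb" is a hypothetical device name; run as root on a Linux host.
    import sys

    DEVICE = "sdb"  # hypothetical device backed by fast non-volatile memory
    SCHED_PATH = f"/sys/block/{DEVICE}/queue/scheduler"


    def select_noop_scheduler(path=SCHED_PATH):
        """Switch the device's elevator to no-op, a simple FIFO queue."""
        with open(path) as f:
            available = f.read().strip()   # e.g. "noop deadline [cfq]"
        if "noop" not in available:
            sys.exit("noop scheduler not offered by this kernel: " + available)
        with open(path, "w") as f:
            f.write("noop")                # the kernel switches the elevator


    if __name__ == "__main__":
        select_noop_scheduler()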

In databases, the buffer manager and the query optimizer also rely on input from algorithms designed to optimize performance for disks. They’ll also need a redesign to take advantage of the new memory technologies, Swanson said.
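
A familiar illustration of that disk bias, drawn from general practice rather than the UCSD work, is the cost model a query planner uses to choose between plans: planners in the PostgreSQL mold price a random page fetch several times higher than a sequential one, which steers them away from index scans over scattered pages. On memory-like storage that gap largely disappears. The constants below are illustrative assumptions only.

    # Illustrative only: how disk-era cost constants can flip a query
    # planner's choice. The parameter names echo PostgreSQL's seq_page_cost
    # and random_page_cost settings; the "PCM-like" values are assumptions.

    def scan_cost(pages, random_fraction, seq_page_cost, random_page_cost):
        """Estimated page-fetch cost for a plan that reads `pages` pages,
        of which `random_fraction` are random accesses."""
        random_pages = pages * random_fraction
        seq_pages = pages - random_pages
        return seq_pages * seq_page_cost + random_pages * random_page_cost


    TABLE_PAGES = 10_000        # pages a full sequential scan must read
    INDEX_SCAN_PAGES = 3_000    # scattered pages an index scan would touch

    # Disk-tuned constants: a random fetch is priced at 4x a sequential one.
    disk_index = scan_cost(INDEX_SCAN_PAGES, 1.0, 1.0, 4.0)   # 12,000
    disk_seq = scan_cost(TABLE_PAGES, 0.0, 1.0, 4.0)          # 10,000

    # Assumed PCM-like constants: random access barely dearer than sequential.
    pcm_index = scan_cost(INDEX_SCAN_PAGES, 1.0, 1.0, 1.1)    # 3,300
    pcm_seq = scan_cost(TABLE_PAGES, 0.0, 1.0, 1.1)           # 10,000

    print("disk model: index scan", disk_index, "vs sequential scan", disk_seq)
    print("pcm model: index scan", pcm_index, "vs sequential scan", pcm_seq)

Under the disk-tuned constants the planner prefers the full sequential scan; with the flatter constants the same index scan wins by a wide margin.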

In addition, UCSD’s researchers identified places where they could achieve greater efficiency by taking parts of the software and implementing them in hardware. For instance, they shifted the file system’s permission-check function into hardware.

The project team is trying to come up with new and more interesting commands, beyond simple read and write, to implement inside a storage array to help accelerate particular applications.

“For instance, you could imagine telling your storage device to go and look through a text file for some key phrase. It could do that much more quickly on its own than it would by bringing it into your server’s memory and searching through it with a normal CPU,” Swanson said.
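
The contrast is easy to sketch. In the snippet below, host_side_search is the conventional path that hauls the whole file across the interconnect, while smart_array_search stands in for a hypothetical offloaded command; the device object and its search call are invented for illustration and are not part of the Moneta work as described.

    # Contrast between conventional host-side search and the kind of offloaded
    # search command Swanson imagines. The smart_array_search function and its
    # device.search call are invented for illustration only.

    def host_side_search(path, phrase):
        """Conventional path: pull the whole file over the interconnect, then
        scan it with the server's CPU."""
        with open(path, "rb") as f:
            data = f.read()                 # bulk data crosses the bus
        needle = phrase.encode()
        offsets = []
        start = 0
        while True:
            pos = data.find(needle, start)
            if pos == -1:
                return offsets
            offsets.append(pos)
            start = pos + 1


    def smart_array_search(device, path, phrase):
        """Hypothetical offloaded command: the array scans its own media and
        returns only the matching offsets, so the bulk data never moves."""
        return device.search(path, phrase.encode())   # assumed device API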

The UCSD lab’s research may eventually find commercial application. Swanson acknowledged that there have already been discussions with vendors about pieces of the technology they might want to license.

Swanson predicted that future enterprise data storage systems would bear little resemblance to the shared storage devices of today. He foresees, within the next decade, a melding of the direct-attached and network-attached models, possibly with storage cards inside each server, networked together over a high-performance interconnect.

“The current model where you have this network-attached storage device makes sense because you have more data than you could ever fit into one server and lots of machines need to access it. The latency to get to the storage is pretty high, but for disks, it doesn’t make any difference because disks are really slow,” Swanson said. “Some of these new technologies are much faster, and that’ll hurt a lot more. So we have to rethink how we build these storage systems.”

This article was previously published on SearchStorage.com.