Oxford University rethinks data center storage

Beth Pariseau, News Writer

The manager of a new supercomputing center at Oxford University said his latest storage choice, Pillar Data Systems' Axiom midrange array, reflects the fact that his high-performance computing (HPC) lab is growing concerned with storage issues once the domain of corporate storage managers.

When it opens its new facility in August, the newly refurbished Oxford e-Research Centre (OeRC) will provide a central computing and data repository for researchers at each of the university's 39 colleges. During the latest reinvestments in technology, according to center manager Dr. Jon Lockley, a new need for data center storage has emerged in a computing environment once solely concerned with CPU horsepower.

For example, "we have a particularly nasty chemistry application, which can easily produce temporary files of up to 2 terabytes," Lockley said, a load that caused the previous systems, which consisted of NFS file servers attached to a single 1 terabyte (TB) unbranded JBOD, to crash. Aside from the JBODs, the existing supercomputing center is also running EMC Corp.'s AX150 array.

Lockley said budgetary concerns were a factor in purchasing such small systems, but added that until recently, they had been sufficient, since supercomputing at Oxford remained focused on providing high-octane "scratch space" for research calculations.

Now, 2 TB chemistry files are just the tip of the iceberg as the new data center storage plan gets going. Further complicating matters is the fact that Lockley's team won't know exactly how many researchers or how much data they'll bring with them until the facility opens its doors.

"I've had people come up to me in the corridor and talk about how great it is we're getting this new system, then ask me if I have 20 terabytes," Lockleysaid. "We are probably going to have to add more than 100 terabytes before the end of the year."

Lockley declined to specify exactly which vendors' products he had evaluated prior to selecting Pillar's Axiom array, but said they were storage area network (SAN), network attached storage (NAS) and multiprotocol systems from "mostly legacy vendors."

The problem with most of the products he looked at, he said, was that "there were too many products where we would have had to make a design decision from day one" in terms of storage capacity allocation and provisioning.

Pillar's array, meanwhile, scales its controllers separately from disks, which Lockley said will give the university more flexibility as the new data center gets up and running.

A software feature that sold Lockley on the Pillar array is a part of the AxiomOne management console that allows him to create "what-if" scenarios and ascertain from the array how many controllers and disk bricks will be needed to maintain a predefined level of service.
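The article doesn't describe how AxiomOne arrives at its answers, but the underlying question is a capacity-planning calculation. Below is a minimal back-of-envelope sketch of how such a "what-if" might be modeled; the per-brick and per-controller figures (BRICK_CAPACITY_TB, BRICK_IOPS, CONTROLLER_IOPS) are assumptions invented for the example, not Pillar specifications.

```python
import math
from dataclasses import dataclass

# Illustrative hardware figures -- assumptions for this sketch,
# not Pillar Axiom specifications.
BRICK_CAPACITY_TB = 3.0   # usable capacity per disk brick
BRICK_IOPS = 1500         # sustained IOPS per brick
CONTROLLER_IOPS = 20000   # IOPS one controller can drive

@dataclass
class WhatIfResult:
    bricks: int
    controllers: int

def plan(capacity_tb: float, target_iops: int) -> WhatIfResult:
    """Answer a 'what-if': how many bricks and controllers does a
    given capacity/performance service level require?"""
    bricks_for_capacity = math.ceil(capacity_tb / BRICK_CAPACITY_TB)
    bricks_for_iops = math.ceil(target_iops / BRICK_IOPS)
    bricks = max(bricks_for_capacity, bricks_for_iops)
    controllers = max(1, math.ceil(target_iops / CONTROLLER_IOPS))
    return WhatIfResult(bricks=bricks, controllers=controllers)

# The corridor question -- "do I have 20 terabytes?" -- at a modest IOPS target.
print(plan(capacity_tb=20, target_iops=30000))
# WhatIfResult(bricks=20, controllers=2)
```

The point of running such a model up front is exactly the one Lockley makes: it turns a day-one design decision into a question that can be re-asked as demand becomes known.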

"We're headed into the great unknown with this project," Lockley said. "We need all the help we can get when it comes to predicting our capacity and storage purchases."

When it comes to performance, Lockley said he plans to rely on Pillar's quality of service (QoS) feature to feed large files or high-transaction database data into the compute cluster, which will handle the intensive processing. He acknowledged that an enterprise midrange array isn't generally associated with high-performance computing, but said it's data management that will become the real issue for the university once the new system is up and running.
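Pillar's QoS operates inside the array and the article gives no implementation detail, but the general idea of class-based prioritization can be sketched in a few lines. In the hypothetical example below, the service-class names and the QosQueue type are invented for illustration; they are not Pillar's QoS levels or API.

```python
import heapq
import itertools

# Invented service classes for illustration -- lower number = served first.
PRIORITY = {"premium": 0, "high": 1, "medium": 2, "archive": 3}

class QosQueue:
    """Dispatch I/O requests by service class, so traffic tagged 'premium'
    (e.g. hot database volumes or large scratch files bound for the compute
    cluster) is served ahead of archive traffic."""
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # preserves FIFO order within a class

    def submit(self, service_class: str, request: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[service_class], next(self._tie), request))

    def dispatch(self) -> str:
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.submit("archive", "nightly-backup-read")
q.submit("premium", "chemistry-temp-write")
print(q.dispatch())  # chemistry-temp-write
```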

"Everyone's having a bit of a rethink about storage right now," Lockley said. "Data management is going to be the issue for us."

For example, Lockley said, backups for the system will now probably also fall under his purview. "We used to be able to tell people, don't count on us" when it came to protecting data, he said. "But that was back when all we sold was compute power."