MarketLive manages tier 2 storage with Scale Computing, Rsync replication
eCommerce software vendor MarketLive Inc. stores its most mission-critical Oracle database and website content data on a mix of NetApp and Hitachi Data Systems (HDS) disk arrays. An AMS1000 and several WMS arrays from HDS hold most of the company's Oracle database data, while log files and unstructured Web content are stored on four NetApp FAS3040 disk arrays. MarketLive uses scale-out NAS from Scale Computing for tier 2 data.
MarketLive's primary environment holds approximately 10 TB of usable capacity and supports about 300 servers that run a mix of Windows and Linux operating systems. Boos said he's been happy with NetApp's performance for primary storage, but as his data grew, he began looking for a cheaper tier 2 disk array. "NetApp works," Boos said, "but it's expensive. I think NetApp is a software company that just happens to sell hardware. You start with about $100,000 for the raw disk and NFS, but then if you want to do CIFS, it's another $30,000, snapshots another $30,000. You pay two to three times for software licenses what you paid for hardware."
MarketLive became an early adopter of startup Scale Computing's NAS -- based on IBM's GPFS scale-out NAS -- approximately 18 months ago, and finally put it into production about nine months ago.
Because NetApp's replication tools only work with other NetApp arrays and Scale has yet to add native replication of its own, Boos said he relies on an open-source tool called Rsync to send files from the primary disk array to the three-node 3 TB Scale Computing cluster. Boos said he's had to see Scale Computing through its early growing pains, but the price is right: approximately 20% of the cost of his primary data storage arrays.
"We can run NFS and CIFS in the same box without an additional license," Boos said, and he's enjoyed having input into a small, emerging vendor's product design. However, he said he's eager for Scale Computing to add distance replication and support for additional snapshots. "The biggest need we have is filer-to-filer replication across the wire and the ability to remotely snapshot that data at the secondary site for disaster recovery and testing," he said.
Orbital Sciences Corp. tiers NetApp with F5 file virtualization
Orbital Sciences, which designs and manufactures communications satellites and equipment for manned space missions, began in 2006 with 33 TB of data under management. Today, with a typical data-retention period of 20 years — the time it takes for a product to be designed, launched, operated through its lifetime and retired — that number has grown to approximately 150 TB, according to senior director of information services Tom Hall.
Orbital Sciences, which had been using NetApp FAS3170s with 15,000 rpm 300 GB Fibre Channel (FC) disks for primary storage, added NetApp FAS3140s with 7,200 rpm 1 TB SATA disks, along with a few FAS2020s and FAS3150s at remote sites, for approximately 20% of the cost of the Fibre Channel disk arrays. Hall estimated Orbital Sciences won't have to buy tier 1 storage for another two years, and will avoid $1 million in capital expenditures over three years.
At its Dulles, Va., headquarters and a secondary data center in Chandler, Ariz., Orbital Sciences has FAS3140s and FAS3170s attached to an F5 Networks Inc. ARX file virtualization switch to smooth data migrations between tier 1 and tier 2. Hall said the operational cost savings paid back the ARX investment in approximately 13 months.
Hall said the F5 ARX switch also gives him flexibility if he wants to add NAS from another vendor. "We just acquired the satellite division of another company, which is an EMC shop," he said. "We don't use ARX with that storage, but if we have other acquisitions, we can set ARX in front of any vendor's storage — that's something no single storage vendor is likely to provide."
There are also less-expensive tools for moving data between tiers, such as the Rsync utility being used by MarketLive, but Orbital Sciences' senior manager of information services Bryan Pretre said the true appeal of ARX is the ability to evaluate and sort data by type prior to migrations. "You can do migration with some other tool, but it won't tell you what data to move, and it won't do nondisruptive migrations," he said.
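The "evaluate and sort data by type" step Pretre credits to ARX — deciding which files belong on tier 2 before moving anything — can be roughed out by hand with standard tools. The sketch below is a simplified stand-in for what ARX automates with policies, using hypothetical demo paths and an age-based rule rather than ARX's richer type-based classification:

```shell
#!/bin/sh
# Rough illustration of pre-migration classification: list files on the
# tier-1 mount untouched for more than a year as tier-2 candidates.
# Paths and the age threshold are hypothetical, not Orbital's policy.
TIER1="${TIER1:-/tmp/tier1-demo}"
AGE_DAYS=365

# Build a tiny demo tree: one stale file, one recently modified file.
mkdir -p "$TIER1"
touch "$TIER1/old-report.pdf"
touch "$TIER1/current.doc"
# Backdate the stale file's modification time by ~2 years (POSIX touch).
touch -t 202001010000 "$TIER1/old-report.pdf"

# Files not modified in AGE_DAYS days become migration candidates.
find "$TIER1" -type f -mtime +"$AGE_DAYS" -print > /tmp/migration-candidates.txt
```

Unlike ARX, a script like this only produces a candidate list; the actual move would still disrupt clients unless something (a virtualization layer, a remount, symlinks) preserves the original namespace.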
While ARX has performed well, even between high-performance server clusters and their back-end storage, careful planning was required to make sure the switch was configured correctly, Pretre said. "We talked to some other customers who'd been so granular with their data volumes it became difficult to manage them," he said.