Pivot3 adds RAID protection to serverless systems
Pivot3 is upgrading the Serverless Computing platform it launched last August. The platform eliminates physical application servers by moving their workloads onto Pivot3's x86-based storage nodes, which run the open-source Xen hypervisor. Pivot3 is now adding what it calls RAID 6e, which tolerates three simultaneous drive failures or the loss of one appliance plus one drive, while raising maximum capacity from eight appliances (96 drives) to 12 appliances (144 drives). It's also adding network-attached storage (NAS) capability through Microsoft Windows Storage Server. Each array now scales to 48 server cores delivering 24 Gbps of iSCSI bandwidth.
RAID 6 allows a system to recover from two simultaneous drive failures. The ability to survive the loss of an extra drive is key for Pivot3, which plays mostly in the video surveillance market and has customers running many high-capacity SATA drives. Lee Caswell, Pivot3's chief marketing officer, said RAID 6e requires no more drives or capacity than RAID 6.
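The difference between the RAID levels discussed here comes down to how many simultaneous drive losses an array can absorb before data becomes unrecoverable. A minimal sketch of that arithmetic follows; the level-to-tolerance mapping reflects the figures in the article (RAID 5 survives one failure, RAID 6 two, RAID 6e three), while the "6e" semantics beyond drive counts, such as the appliance-plus-drive case, are Pivot3-specific and not modeled here.

```python
# Illustrative mapping of RAID level -> tolerated simultaneous drive
# failures, per the figures quoted in the article. This models drive
# losses only, not Pivot3's appliance-plus-drive failure case.
FAILURE_TOLERANCE = {"RAID 5": 1, "RAID 6": 2, "RAID 6e": 3}

def survives(level: str, failed_drives: int) -> bool:
    """Return True if an array at `level` can still serve data
    after `failed_drives` simultaneous drive losses."""
    return failed_drives <= FAILURE_TOLERANCE[level]

for level in FAILURE_TOLERANCE:
    print(level, [survives(level, n) for n in (1, 2, 3)])
```

This is why Pritchard's two-drive loss below put a RAID 5 setup "in a hole": a second failure during a long rebuild window exceeds RAID 5's single-failure tolerance.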
Jason Pritchard, integration manager for the Choctaw Nation, estimates he saved approximately $300,000 by getting rid of around 90 application servers after installing Pivot3 systems in three Choctaw casinos across Oklahoma. Pritchard said he's installing one of the new systems this week in a new casino in Grant, Okla. Choctaw casinos have more than 2 PB overall on Pivot3 Serverless Computing systems, which store data from the casinos' video surveillance cameras.
"I'm looking forward to RAID 6e and to not having to worry so much about drive failure," Pritchard said. "With RAID 5, if we lost two drives we were in a hole. With our old [Adaptec Inc.] DAS [direct-attached storage] storage banks, it took 24 to 36 hours to rebuild and re-initialize a drive. With Pivot3, the rebuild is a whole lot faster. It takes an hour or so to rebuild, and we can lose multiple drives across a cluster and keep recording."
Because the casinos are scattered throughout the state, Pritchard said any downtime can result in lost video.
"Sometimes we lost a day's amount of video with our old storage because it takes three hours to get somebody up there to the casino," Pritchard said. "Some of that stuff comes back to bite you on the butt later on. It's Murphy's Law that if something happens and you need to look at a video, it will be on that server that was down."
Seanodes enters SSD market
Seanodes is extending its support to server hardware with solid-state drives (SSDs). Its underlying server nodes can theoretically hold up to 48 SSDs or hard disk drives, but Seanodes business director Frank Gana said customers are more likely to deploy two SSDs per Gigabit Ethernet (GbE) I/O controller for the maximum performance benefit.
From there, up to 128 CPU nodes can be aggregated with Exanodes, and all the SSD capacity in that pool can be accessed from any host attached to it. The remaining drive slots inside a server can be filled with hard disk drives in a separate pool. Exanodes aggregates not only the capacity of the SSDs but also the network bandwidth of the cluster, so a 16-node configuration, for example, could get up to 16 Gbps of bandwidth. Gana argues that's an improvement over putting a solid-state drive behind an array controller, which some in the storage market have argued defeats the performance gains from SSDs.
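The bandwidth claim above is straightforward aggregation math: each node contributes one GbE controller's worth of throughput to the pool. A back-of-the-envelope sketch, using the article's 16-node example and Gana's two-SSDs-per-controller guidance; the 64 GB drive size is a hypothetical assumption for illustration, not a Seanodes figure.

```python
# Aggregation arithmetic for an Exanodes-style pool, using the
# article's 16-node example. SSD size is an assumed value.
NODES = 16
GBPS_PER_NODE = 1      # one Gigabit Ethernet controller per node
SSDS_PER_NODE = 2      # the density Gana suggests per GbE controller
SSD_GB = 64            # hypothetical drive size, for illustration only

aggregate_bandwidth_gbps = NODES * GBPS_PER_NODE       # 16 Gbps
pool_capacity_gb = NODES * SSDS_PER_NODE * SSD_GB      # 2,048 GB

print(f"{aggregate_bandwidth_gbps} Gbps aggregate, {pool_capacity_gb} GB pooled")
```

The contrast with an array controller is that the controller becomes a single choke point for all SSD traffic, whereas here each added node adds its own network path.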
Some vendors supporting solid-state drives also include automated tiered storage functions for moving data between SSD and hard disk drive pools, but Seanodes said its customers can use third-party tools such as VMware Inc.'s Storage VMotion for that.
Jeff Boles, senior analyst and director of validation services at Hopkinton, Mass.-based Taneja Group, said the ability to pool the capacity of solid-state drives could lower the cost and let organizations add more capacity per drive – something that's on customers' SSD wish lists.
"Compared with array-based SSDs that may have a per-device markup of anywhere between five times to 20 times, SSD in the server costs a fraction of that," Boles wrote in an email. According to a Seanodes demonstration, an eight-node cluster with one SSD per node can offer up to 36,000 IOPS for less than $20,000 (for this demonstration, the servers didn't contain hard disk capacity).
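The demo figures imply a rough cost-per-IOPS number worth spelling out. A sketch under the article's stated numbers (8 nodes, 36,000 IOPS, under $20,000); applying the quoted 5x–20x per-device markup range directly to the $/IOPS figure is an illustrative simplification, since the markup applies to device price rather than delivered IOPS.

```python
# Rough cost-per-IOPS from the Seanodes demo figures in the article.
DEMO_COST_USD = 20_000
DEMO_IOPS = 36_000

server_side = DEMO_COST_USD / DEMO_IOPS   # roughly $0.56 per IOPS

# Illustrative only: scale by the article's quoted 5x-20x array markup
# range to suggest what the same devices might cost behind an array.
array_low = server_side * 5
array_high = server_side * 20

print(f"server-side: ${server_side:.2f}/IOPS; "
      f"with 5x-20x markup: ${array_low:.2f}-${array_high:.2f}/IOPS")
```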
Boles maintained that the major array vendors will have to start thinking more along these lines soon to stay competitive.
"Solutions from the likes of Seanodes, [Dell Inc.'s] EqualLogic, Atrato [Inc.] and others are making SSD more scalable, more affordable and consequently more useful than it is in classic arrays," he wrote. "When we're seeing the street prices of SSD devices marked up by 10 to 20 times by the time it is integrated into an array, and then the number of devices are limited as well, I think more end users are going to be easily attracted by what these non-traditional solutions can deliver."