
Users urged to test storage devices for better data protection

Dave Raffo

SAN FRANCISCO – Regular testing is necessary for keeping hardware, software and business processes tuned for optimal performance, speakers emphasised during the opening day of Storage Decisions.

Brian Garrett, Enterprise Strategy Group lab technical director, provided recommendations for testing and tuning storage systems, while Glasshouse Technologies storage consultant Jeff Harbert covered the challenges of testing an organisation's backups and restores.

Garrett told attendees that the most important task for testing and tuning is to balance workloads over the right number of drives protected with the appropriate RAID level. "The disk drive is the slowest component in a computer," he said. "Spreading the workload over a large number of drives can solve most performance problems."
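
As a rough sketch of that sizing logic, the back-of-the-envelope calculation below estimates a drive count from a workload and a RAID level. The per-drive IOPS figure and the RAID write penalties are common rules of thumb, not numbers from the presentation.

```python
# Rough drive-count estimate for a given workload, using illustrative
# per-drive IOPS and the commonly cited RAID write penalties
# (RAID 10 = 2 back-end I/Os per write, RAID 5 = 4, RAID 6 = 6).

RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def drives_needed(host_iops, write_pct, raid_level, iops_per_drive=180):
    """Estimate how many drives are needed to absorb a host workload."""
    writes = host_iops * write_pct
    reads = host_iops - writes
    # Each host write turns into several back-end I/Os depending on RAID level.
    backend_iops = reads + writes * RAID_WRITE_PENALTY[raid_level]
    return int(-(-backend_iops // iops_per_drive))  # ceiling division

if __name__ == "__main__":
    for level in ("RAID 10", "RAID 5"):
        n = drives_needed(host_iops=5000, write_pct=0.3, raid_level=level)
        print(f"{level}: ~{n} drives for 5,000 host IOPS at 30% writes")
```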

He recommended testing when evaluating a system for purchase, changing an existing system or determining whether a system is well-tuned after a change. Different tests can be used to determine which of several similar systems is fastest, which system does more work in parallel and which offers the best price/performance.

Transactions, response time and IOPS
The key metrics to measure are transactions (the number of applications that can be supported at an acceptable response time, transactions per second, email database operations per second, file operations per second), response time (the speed of each I/O, measured in milliseconds), throughput (how much data can be moved through a storage system) and IOPS, Garrett said.

Measuring IOPS, he said, "doesn't tell you about real-world applications, but gives you an idea of the storage engine of a system."
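
To make those definitions concrete, the minimal sketch below derives IOPS, throughput and average response time from one measurement interval. The sample figures are invented purely to illustrate the arithmetic; they are not data from the session.

```python
# Derive IOPS, throughput and average response time from one test interval.
# All numbers are made up for illustration.

completed_ios = 120_000           # I/Os finished during the interval
interval_seconds = 60             # length of the measurement window
bytes_moved = completed_ios * 8 * 1024  # assumes 8 KB per I/O in this run
latencies_ms = [0.8, 1.2, 0.9, 4.5, 1.1]  # sampled per-I/O response times

iops = completed_ios / interval_seconds
throughput_mbps = bytes_moved / interval_seconds / (1024 * 1024)
avg_response_ms = sum(latencies_ms) / len(latencies_ms)

print(f"IOPS: {iops:,.0f}")
print(f"Throughput: {throughput_mbps:.1f} MB/s")
print(f"Average response time: {avg_response_ms:.2f} ms")
```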

The storage performance tools that give the best indication of real-world performance, such as TPC-C for databases, LoadSim for email, SPEC SFS for NAS and the SPC benchmarks for block storage, are harder to run yourself, Garrett said. These tools generally require a lot of equipment, and the tests are often conducted by vendors or third-party labs.

But there are other, easier-to-use tools that can help administrators compare how storage systems handle certain applications or workloads. These tools don't reflect real-world performance as closely as the others, but they are often built into applications or available online. They include Jetstress for email, Iometer for block-based storage and IOzone for file shares.
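
Those tools generate far more realistic workload mixes, but a stripped-down random-read loop shows the basic idea of what they measure. The sketch below is not a substitute for Iometer or IOzone; the file path and sizes are placeholders, and it makes no attempt to control for OS caching.

```python
# Minimal random-read micro-benchmark: an illustration of the kind of
# workload Iometer or IOzone generates, not a replacement for them.
import os, random, time

TEST_FILE = "/tmp/io_test.bin"   # placeholder path
FILE_SIZE = 256 * 1024 * 1024    # 256 MB test file
BLOCK = 8 * 1024                 # 8 KB reads
READS = 2000

# Create the test file once (sequential write).
if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
    with open(TEST_FILE, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    for _ in range(READS):
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
elapsed = time.perf_counter() - start

print(f"{READS / elapsed:,.0f} random 8 KB reads/sec "
      f"(cache effects not controlled for)")
```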

Garrett suggested that when looking to buy a storage system, an admin should request industry-audited benchmark results from vendors. He also said that price/performance matters more than raw performance numbers.
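
As a simple illustration of why price/performance can tell a different story from raw performance, the comparison below uses hypothetical prices and benchmark results.

```python
# Compare two hypothetical arrays on cost per IOPS rather than raw IOPS.
systems = {
    "Array A": {"price_usd": 250_000, "benchmark_iops": 180_000},
    "Array B": {"price_usd": 140_000, "benchmark_iops": 120_000},
}

for name, s in systems.items():
    dollars_per_iops = s["price_usd"] / s["benchmark_iops"]
    print(f"{name}: {s['benchmark_iops']:,} IOPS, "
          f"${dollars_per_iops:.2f} per IOPS")
# Array A is faster, but Array B delivers each IOPS for less money.
```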

Some good testing tools are built into products, according to Garrett. For example, bandwidth monitoring capabilities built into Fibre Channel switches from Brocade and Cisco can indicate whether an upgrade from 4 Gbps to 8 Gbps devices will help. "They tell you how much bandwidth is being used," he said. "If it's about 15%, you probably wouldn't benefit from going up to 8 Gb. If it's saturated at around 60%, you'll get more bang for your buck from an upgrade."
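
A sketch of that rule of thumb applied to utilisation figures pulled from switch monitoring is shown below. The thresholds come from Garrett's comments; the per-port sample data is invented.

```python
# Apply Garrett's rule of thumb to peak link utilisation readings
# gathered from switch monitoring; the sample percentages are invented.

def upgrade_recommendation(peak_utilisation_pct):
    if peak_utilisation_pct < 15:
        return "little benefit expected from moving 4 Gbps to 8 Gbps"
    if peak_utilisation_pct >= 60:
        return "link is saturating; an 8 Gbps upgrade should pay off"
    return "monitor further before deciding"

for port, peak in {"port 1": 12, "port 7": 64, "port 9": 38}.items():
    print(f"{port}: peak {peak}% utilised -> {upgrade_recommendation(peak)}")
```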

He also pointed to storage management products, such as Akorri's BalancePoint and Virtual Instruments' NetWisdom, as useful for monitoring performance of systems in-house.

Backup testing: Don't forget restores

Jeff Harbert said that backup testing is a crucial step in making sure critical data is protected, although many organisations don't do it often enough, or at all. While backup testing is not as complicated or expensive as disaster recovery testing, it does require a concerted effort from administrators and management buy-in, he said.

"Backup testing is not easy," Harbert said. "Recovery testing is not easy. It requires a lot of resources, and most people don't have the time to do this."

Harbert advised testing the entire backup and restore process to make sure that data can be recovered in case of an outage. It's not enough to know the data was backed up successfully. "If 50% of backups failed, but 100% of restores worked, would anybody notice?" he asked. "If 100% of backups were successful, but only 50% of restores worked, that would be a bigger problem."

Tests should determine whether backed-up data is recoverable, whether the right technology is being used and whether performance is sufficient, Harbert said. Tape and disk backup products need to be tested, as do data deduplication and encryption (if they're being used). He recommended testing new devices at the time of implementation and said that the results of tests should meet application owners' requirements.
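
One way to answer the "is the data recoverable" question after a test restore is a straight checksum comparison between the source and the restored copies. The sketch below assumes hypothetical directory paths; it is an illustration, not a prescribed procedure from the session.

```python
# Verify a test restore by comparing checksums of source files against
# the restored copies. The paths are placeholders for illustration only.
import hashlib
from pathlib import Path

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir, restore_dir):
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            restored = Path(restore_dir) / src.relative_to(source_dir)
            if not restored.exists() or sha256(src) != sha256(restored):
                mismatches.append(str(src))
    return mismatches

bad = verify_restore("/data/finance", "/restore_test/finance")
print("restore verified" if not bad
      else f"{len(bad)} files failed verification")
```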

Harbert also said that administrators should take a "fire drill" approach to backup testing. They should randomly select applications and associated servers for a recovery test, have a third-party observer help track the process and give participants no more than three days' notice.
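
A minimal sketch of the random-selection part of that fire drill might look like the following; the application inventory is made up for illustration.

```python
# Randomly pick applications for an unannounced restore drill, in the
# spirit of Harbert's "fire drill" approach. The inventory is invented.
import random

app_inventory = {
    "payroll":     ["pay-db-01", "pay-app-01"],
    "email":       ["exch-01", "exch-02"],
    "web store":   ["web-01", "web-02", "db-03"],
    "file shares": ["filer-01"],
}

drill_apps = random.sample(list(app_inventory), k=2)
for app in drill_apps:
    print(f"Restore test: {app} (servers: {', '.join(app_inventory[app])})")
```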

"If you can tell me when a test is going to happen, it's not really a test," he said. "You'll tweak things so the test works. That is not the real world."