Storage monitoring tools cut hospital's storage performance problems



When storage performance problems affect surgical-care applications and electronic medical records, lives can be at stake. So when his storage-area network (SAN) experienced several outages, Chris Carlton, storage lead at a large health care provider, knew he needed a new system and the storage monitoring tools that would allow him to get ahead of any potential crises.

Carlton put together a storage infrastructure approximately three years ago that included a Hitachi Data Systems Universal Storage Platform (USP) V and a Brocade DCX Backbone Fibre Channel switching platform. During the SAN implementation, he installed Virtual Instruments Traffic Access Points (TAPs) throughout the system and adopted Virtual Instruments' VirtualWisdom optimizer for VMware environments. He also implemented Virtual Instruments' ProbeFCX packet analyzer, Rover physical layer switches and Protocol Analyzer for performance monitoring and to improve storage and virtual server performance.

Carlton said his environment is approximately 20% virtualized on VMware Inc.’s vSphere 4.1 platform. In addition to his health care applications, he runs a number of typical applications in his virtual environment, including Microsoft SQL Server and SharePoint. His physical hosts carry between seven and 35 virtual machines (VMs).

Medical records can be enormous files, and Carlton has to be ready when his users quickly eat up his capacity and I/O resources. “There’s no control mechanism on some of these applications,” he said, “so somebody can go crazy and start scanning data at a high resolution. I’ve lost a terabyte of capacity in two days, just like that. So we try to keep ahead of that as best we can.”
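The kind of runaway consumption Carlton describes can be caught by trending daily capacity samples and alerting on abnormal growth. A minimal sketch of that idea follows; the threshold and the sample figures are illustrative assumptions, not data from Carlton's environment.

```python
# Hypothetical sketch: flag abnormal capacity growth from daily usage samples.
# The alert threshold and sample values below are illustrative only.

GROWTH_ALERT_GB_PER_DAY = 500  # alert if used capacity grows by more than ~0.5 TB/day


def capacity_alerts(daily_used_gb):
    """Return (day_index, growth_gb) pairs for days whose growth exceeds the threshold.

    daily_used_gb: used-capacity samples in GB, one per day, oldest first.
    """
    alerts = []
    for day in range(1, len(daily_used_gb)):
        growth = daily_used_gb[day] - daily_used_gb[day - 1]
        if growth > GROWTH_ALERT_GB_PER_DAY:
            alerts.append((day, growth))
    return alerts


# Example: a user scanning records at high resolution eats ~1 TB over two days.
samples = [500_000, 500_020, 500_620, 501_150, 501_160]
print(capacity_alerts(samples))  # → [(2, 600), (3, 530)]
```

Trending growth rather than absolute usage is what lets a team "keep ahead of that," as Carlton puts it, before a pool actually fills.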

His total storage capacity will reach 1 petabyte (PB) soon, and he’s using between 50% and 60% of it.

Carlton also has to watch for performance problems caused by equipment damage. “Those are the things that bite us in the back most of the time because we don’t know they’re problems until they’ve gotten to a severe point,” he said. “You think these problems are minor until it’s the middle of the night and you're in the car driving to work.”

Carlton uses the tools from Virtual Instruments to watch his host bus adapters (HBAs), small form-factor pluggable (SFP) optical transceivers, physical host connectivity and I/Os per second (IOPS). He runs reports that show if any SFPs are performing outside expected ranges. If they don’t meet the manufacturer’s guidelines, he replaces them before they cause trouble. “It’s made a huge difference for us,” he said.
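The SFP check described above boils down to comparing each transceiver's diagnostic readings against the manufacturer's acceptable ranges. A minimal sketch, assuming illustrative metric names and limits (real thresholds come from the transceiver's SFF-8472 digital diagnostics and the vendor's data sheet):

```python
# Hypothetical sketch: flag SFP transceivers whose readings fall outside
# the manufacturer's acceptable ranges. Metric names and limits are
# illustrative assumptions, not actual vendor specifications.

SPEC = {
    "tx_power_dbm": (-8.5, 0.5),    # (min, max) acceptable transmit power
    "rx_power_dbm": (-16.0, 0.5),   # acceptable receive power
    "temp_c": (0.0, 70.0),          # acceptable case temperature
}


def out_of_spec(reading):
    """Return the names of metrics that fall outside the vendor's range."""
    bad = []
    for metric, (lo, hi) in SPEC.items():
        if not (lo <= reading[metric] <= hi):
            bad.append(metric)
    return bad


# An SFP with degraded receive power would be flagged for replacement:
reading = {"tx_power_dbm": -3.2, "rx_power_dbm": -18.7, "temp_c": 41.0}
print(out_of_spec(reading))  # → ['rx_power_dbm']
```

Running a check like this across every port turns a failing optic from a middle-of-the-night outage into a routine scheduled replacement.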

His team also uses NetApp’s SANscreen Service Insight storage management tools, as well as Hitachi Data Systems’ Device Manager and Capacity Reporter.

Recently, Carlton eliminated a major SCSI reservation problem that had been generating an excessive number of daily alerts. Hitachi installed support for the VMware vStorage APIs for Array Integration (VAAI), and it has made a big difference. "A month ago you would have gone in there and seen all the SCSI reservation conflicts," Carlton said. "I haven't done the numbers, but my gut tells me we haven't had an alert since we implemented the VAAI code except for on weekends when [the virtualization team] does migrations for patching."


This was first published in April 2011