Disaster recovery (DR) for files and folders is generally simpler than disaster recovery for applications because you don't have to consider issues like application consistency, transaction integrity and application dependencies.
The challenge with disaster recovery for file-based content is largely one of sheer volume. Companies may have tens or hundreds of terabytes of file data, so determining what to include in the disaster recovery plan can be a daunting task.
Some companies have turned to data classification tools to determine the value of data and its appropriate disaster recovery tier. Data may be classified using a variety of tools:
- Storage resource management (SRM) tools typically classify files by metadata such as file type, size and modification date. An example is the Hewlett-Packard (HP) Co. Storage Essentials File System Viewer module, which allows files to be grouped by various file properties.
- Archiving tools have built-in classification and tend to go beyond metadata to include full content indexing. Symantec Corp.'s Enterprise Vault and archiving products from C2C Systems Limited are examples.
- Data-loss prevention tools detect and prevent the unauthorized transmission of information and include data categorization capabilities. They're available from McAfee Inc., RSA (the security division of EMC Corp.) and Symantec, among others.
- Standalone classification tools, available from companies like Abrevity Inc., Kazeon Systems Inc., Njini Inc. and Permabit Technology Corp., can be used to categorize data to determine the appropriate DR tier.
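The metadata-driven classification the SRM tools above perform can be sketched in a few lines of code. The following Python example walks a directory tree and assigns each file to a DR tier based on its type, size and modification date; the tier names, size thresholds and extension list are illustrative assumptions, not rules from any of the products mentioned.

```python
import os
import time

# Hypothetical tier rules: the extensions, 30-day/1-year cutoffs and 1 GB
# threshold are illustrative assumptions, not vendor defaults.
CRITICAL_EXTENSIONS = {".docx", ".xlsx", ".pdf"}

def classify_file(path):
    """Assign a DR tier from file metadata (type, size, modification date),
    the same properties SRM tools group files by."""
    st = os.stat(path)
    ext = os.path.splitext(path)[1].lower()
    age_days = (time.time() - st.st_mtime) / 86400

    if ext in CRITICAL_EXTENSIONS and age_days < 30:
        return "tier-1"   # recently modified business documents
    if st.st_size > 1_000_000_000 or age_days > 365:
        return "tier-3"   # very large or stale files: lowest DR priority
    return "tier-2"       # everything else

def classify_tree(root):
    """Walk a directory tree and bucket every file by DR tier."""
    tiers = {"tier-1": [], "tier-2": [], "tier-3": []}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            tiers[classify_file(full)].append(full)
    return tiers
```

A real classification product would combine far richer rules (ownership, access patterns, content indexing), but the shape is the same: inspect file properties, map them to a tier, and feed the tier assignments into the DR plan.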
This article originally appeared in Storage magazine.
About this author: Jacob Gsoedl is a frequent contributor to "Storage" magazine.
This was first published in June 2009.