- HDD form factor flash SSD(s) as Tier 0 storage in a multi-tier NAS or storage array.
- HDD form factor flash SSD(s) as all-SSD NAS or storage array.
- PCIe flash SSD storage card(s) or HDD form factor SSD(s) in a caching appliance on the storage network (TCP/IP, SAN or PCIe).
4. HDD form factor flash SSD(s) as Tier 0 storage in a multi-tier NAS or storage array.
Tier 0 storage is similar to HDD form factor flash SSD storage used as cache; the difference is in how that flash is treated. Here, flash functions as the high-performance storage tier: the storage location for the most frequently accessed (hottest) data, and the designated target for data from applications requiring very quick response times and low latency. As the data on Tier 0 ages and is accessed less frequently, auto-tiering software moves it to a lower-performing, lower-cost HDD storage tier.
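The demotion logic described above can be sketched in a few lines. This is a hypothetical illustration only, assuming a simple last-access-time policy; the names (`demote_cold_extents`, the extent dictionaries, the one-week threshold) are invented for the example and do not reflect any particular vendor's auto-tiering implementation.

```python
import time

# Hypothetical auto-tiering demotion sketch: data whose last access is
# older than a threshold is moved from the flash Tier 0 down to an HDD tier.
DEMOTE_AFTER_SECS = 7 * 24 * 3600  # assumed policy: demote after a week idle

def demote_cold_extents(tier0, hdd_tier, now=None):
    """Move extents whose last access time has aged out of Tier 0."""
    now = now if now is not None else time.time()
    for extent_id, meta in list(tier0.items()):
        if now - meta["last_access"] > DEMOTE_AFTER_SECS:
            hdd_tier[extent_id] = tier0.pop(extent_id)

tier0 = {
    "ext-1": {"last_access": time.time()},                  # hot: stays
    "ext-2": {"last_access": time.time() - 8 * 24 * 3600},  # cold: demoted
}
hdd_tier = {}
demote_cold_extents(tier0, hdd_tier)
```

Note that, as the Cons below point out, real auto-tiering software typically moves data only downward in this fashion; promotion back to Tier 0 is the exception rather than the rule.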
Pros: Reduces latencies from applications to shared storage. It works well with virtual servers, VM portability and VM resilience. It's shareable among multiple physical and virtual servers and requires no server resources. TCO per GB is lower than the PCIe form factor. It can redistribute workloads in a way that reduces the total number of HDDs without compromising performance or capacity: capacity is shifted to slower, higher-capacity HDDs while performance-sensitive workloads are pointed at the Tier 0 HDD form factor flash SSDs.
Cons: Capacities are limited, much as with HDD form factor flash SSD cache. As working sets grow along with general data growth, the limited Tier 0 is increasingly unable to keep up with demand, so more applications and users are pointed at slower storage tiers and experience increased latencies and longer response times. As with the other implementations, HDD form factor flash SSDs only benefit the storage system in which they are installed. The most severe performance bottleneck is commonly the storage controller, which increases latency and user response times. Auto-tiering software, which is often (though not always) costly, adds to the storage controller's utilization load, further impinging on performance. It also tends to move data only in a downward direction (Dell Compellent and XIO Hybrid ISE are distinct exceptions).
Best fits: Well-suited for virtual servers and VDI. Good at providing a boost to heavy traffic applications such as email. Quite good at accelerating databases where indexes and hot files are never moved out of Tier 0.
5. HDD form factor flash SSD(s) as all-SSD NAS or storage array.
Implementing an all-flash storage system built entirely from HDD form factor SSDs provides much lower latencies and higher IOPS and throughput while eliminating caching or tiering requirements. All-SSD storage systems have enormous performance and simplicity appeal. Most leverage the SSD performance headroom to include some form of data reduction (deduplication and/or lossless compression); the system and flash are fast enough that applications and users typically don't notice the additional data reduction latency.
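To make the data reduction point concrete, here is a minimal sketch of block-level deduplication, one of the techniques mentioned above. It is an assumption-laden illustration, not any vendor's implementation: the fixed 4 KB chunk size, the `dedupe_write` function and the in-memory chunk store are all hypothetical.

```python
import hashlib

# Illustrative deduplication sketch: identical chunks are stored once and
# referenced by their content hash. Chunk size and store are hypothetical.
CHUNK_SIZE = 4096

def dedupe_write(data: bytes, chunk_store: dict) -> list:
    """Split data into fixed-size chunks; store each unique chunk once."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # write only if chunk is new
        refs.append(digest)
    return refs

store = {}
refs = dedupe_write(b"A" * 8192, store)  # two identical 4 KB chunks
# both chunks hash the same, so only one copy lands in the store
```

The extra hash computation on every write is the "additional data reduction latency" referred to above; on an all-SSD system the media is fast enough that this overhead is rarely noticeable.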
Pros: Reduces latencies from applications to shared storage. A single storage tier eliminates complicated storage tiering software. Works well with virtual servers, VM portability and VM resilience; it's shareable among multiple physical and virtual servers and consumes no server resources. Eliminating HDDs markedly reduces power and cooling. Combining the power and cooling savings with the capacity savings from data reduction yields a net effective TCO per GB in line with many HDD storage systems, while cost per IOPS or throughput is conspicuously better.
Cons: Scalability tends to be limited to less than 500 TB of raw storage, and in some cases much less (SolidFire is an exception that scales to a petabyte of raw storage). The bottleneck with this type of storage system is storage controller utilization: as controller utilization rises, so do latency and user response times.
Best fits: Well-suited for virtual server and VDI environments. Good at providing a boost to virtual environments and heavy traffic applications such as email. Does a good job at accelerating databases. Any application requiring a lot of performance and capacity that’s more than can be found in either caching or tiering is a good fit. Data centers with limited power and cooling availability are also a good fit.
6. PCIe flash SSD storage card(s) or HDD form factor SSD(s) in a caching appliance on the storage network (TCP/IP, SAN or PCIe).
Caching appliances sit non-disruptively on the storage network, logically between clients and the storage systems. They primarily cache reads and metadata, and for NAS only. Caching appliances are loaded up with either PCIe flash SSD cards or HDD form factor SSDs, with capacities tending to be less than 30 TB, and are purpose-built for caching. There are four different types of caching appliances:
- A dumb (severely limited storage system software: snapshots, thin provisioning, data reduction, replication, etc.), non-app-aware, block-based acceleration approach (Violin, Texas Memory Systems, EMC Thunder, Astute Networks).
- A file-based, dumb, non-app-aware variation (Avere).
- IP network intelligent packet inspection that caches appropriate data to the appliance (CacheIQ).
- File-based application read and metadata acceleration caching appliances (Alacritech ANX and NetApp FlexCache-SAA). Files are cached on the appliance based on read frequency and removed as that frequency declines. Metadata is also kept on the appliance. This type of caching typically produces the lowest file latencies.
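The read-frequency policy in the last appliance type above can be sketched as a small frequency-based cache. This is a hedged illustration under simplifying assumptions; the `FrequencyCache` class, its tiny capacity, and the dictionary "backend" standing in for the NAS are all hypothetical and do not describe Alacritech's or NetApp's actual designs.

```python
# Hypothetical sketch of a read-frequency caching policy: files are kept on
# the appliance while read often and evicted as their frequency declines.
class FrequencyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}   # file path -> cumulative read count
        self.files = {}    # file path -> cached contents

    def read(self, path, backend):
        self.counts[path] = self.counts.get(path, 0) + 1
        if path in self.files:
            return self.files[path]          # cache hit: served from flash
        data = backend[path]                 # cache miss: fetch from the NAS
        self.files[path] = data
        if len(self.files) > self.capacity:  # evict the least-read file
            coldest = min(self.files, key=lambda p: self.counts[p])
            del self.files[coldest]
        return data

backend = {"/a": b"a", "/b": b"b", "/c": b"c"}  # stand-in for backend NAS
cache = FrequencyCache(capacity=2)
cache.read("/a", backend); cache.read("/a", backend)  # /a read twice: hot
cache.read("/b", backend)                             # /b read once
cache.read("/c", backend)  # cache full: /b, the least-read file, is evicted
```

A real appliance would track frequency over a decaying time window rather than a raw count, but the principle is the same: as a file's read frequency declines relative to others, it loses its place in the cache.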
Pros: Reduces latencies from applications to shared storage. Caching appliances are the most leverageable and shareable with physical servers, virtual servers and multiple storage systems. The file-based application read and metadata acceleration approach reduces NFS and TCP/IP latencies, making both the reads and metadata a lot faster.
All of the appliance types reduce controller load on the backend storage systems, freeing backend storage controller cycles for other storage functions and improving overall performance. TCO tends to be the lowest with this type of flash SSD, while overall cost per IOPS or throughput is equivalent.
Cons: Scalability tends to be limited, in some cases to less than 10 TB of raw storage. It is another system sitting between servers and storage, which makes troubleshooting a bit more complicated. For file caching, these appliances work better with NFS than with CIFS.
Best fits: Well-suited for virtual server and VDI. Ideal for lowering overall storage costs while increasing IOPS and throughput. Good fit for HPC (block), rendering (file), genome and protein sequencing.
One final note: “One size or type of flash storage does not fit all.” Be prepared to implement different flash storage variations to solve different problems and application requirements.
About the author: Marc Staimer is the founder, senior analyst and CDS of Dragon Slayer Consulting in Beaverton, Ore. Marc can be reached at email@example.com.
This tip was originally published on SearchSolidStateStorage.com.
This was first published in April 2012