PanFS with RAID 6+ delivers a paradigm shift for network attached storage. While traditional NAS products decrease in reliability and availability as they scale, Panasas storage actually increases in reliability and availability with scale. Its intelligent per-file distributed RAID architecture overcomes the limitations of traditional RAID. PanFS delivers fast rebuild times to minimize data risk, efficient RAID performance that accelerates time to results, and a file system that remains available even after simultaneous triple disk failures that would take down lesser systems.
PanFS with RAID 6+ is a revolutionary advancement in reliability at scale. Extremely high reliability is achieved through triple parity data protection and an intelligent per-file RAID architecture that increases reliability with scale. Panasas scale-out NAS protects against two simultaneous storage blade failures while providing supplementary protection against single sector hard drive errors. Per-file RAID also improves time-to-recovery by reducing the amount of data to recover in the highly unlikely case that fault tolerance is exceeded.
The features of PanFS RAID 6+ go beyond triple parity and per-file RAID reliability, providing value far beyond that found in traditional NAS products:
PanFS RAID 6+: Features and Benefits

| Feature | Benefit |
| --- | --- |
| RAID 6+ triple parity data protection | No data loss with two simultaneous storage blade failures. Nearly instantaneous repair of single and some multi-sector drive failures. |
| Per-file distributed RAID | Data protection is implemented on a per-file basis. Data reliability increases (not decreases) with the scale of the ActiveStor appliance. |
| Triple-mirrored small files | Highest level of protection provided for small files, with flash storage speeding rebuilds. |
| Linearly scalable parallel rebuild performance | All director blades work in parallel to quickly rebuild and restore data protection. Helps deliver reliability at scale. |
| Extended File System Availability (EFSA) | Preserves file system availability and helps ensure business continuity, even after a triple drive failure when traditional NAS systems would be down. |
| Scalable client-side RAID Engine | RAID performance scales with the size of the client environment. Eliminates hardware RAID bottlenecks. |
Panasas delivers revolutionary enterprise data reliability and availability by providing triple parity data protection in an innovative way:
Horizontal Parity – Using RAID 6 dual-parity erasure codes, data and parity are spread across multiple physical blades so that if up to two blades fail simultaneously, the data can be rebuilt from other blades. The per-file RAID architecture means that only the file components on the failed drives have to be rebuilt (as opposed to entire drives at a block level). PanFS greatly reduces the likelihood of simultaneous drive failures with exceptionally fast rebuilds and an architecture that minimizes the chance of multiple blade failures affecting a given file.
Vertical Parity – Using supplemental single parity protection, vertical parity provides advanced protection against sector failures within each storage blade. This is important because most double failures occur after hitting single sector errors during reconstruction in traditional storage products. Vertical parity is the solution, enabling fast sector recovery without the need to completely rebuild storage blades via Horizontal Parity. Vertical parity is also used to detect and correct single bit flips on the hard drives during background data scrubbing procedures.
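The two parity directions can be illustrated with a toy layout. The sketch below uses single XOR parity in each direction purely for clarity (actual RAID 6 horizontal protection uses dual-parity erasure codes, and the blade and sector counts here are made up): a bad sector is repaired locally via vertical parity without touching other blades, while a whole lost blade is rebuilt via horizontal parity.

```python
# Toy illustration of horizontal vs. vertical parity. Single XOR parity
# is used in each direction for brevity; real RAID 6 horizontal
# protection uses dual-parity erasure codes.

def xor(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# Four data blades, each holding three 4-byte sectors of a file's stripe.
blades = [[bytes([b * 10 + s] * 4) for s in range(3)] for b in range(4)]

# Horizontal parity: XOR of corresponding sectors across blades.
h_parity = [xor([blades[b][s] for b in range(4)]) for s in range(3)]

# Vertical parity: XOR of the sectors within each blade.
v_parity = [xor(blades[b]) for b in range(4)]

# Single-sector error (blade 2, sector 1): recover via vertical parity
# using only that blade's remaining sectors -- no other blade involved.
good = [blades[2][s] for s in range(3) if s != 1]
recovered = xor(good + [v_parity[2]])
assert recovered == blades[2][1]

# Whole-blade loss of blade 2: rebuild every sector via horizontal parity.
rebuilt = [xor([blades[b][s] for b in range(4) if b != 2] + [h_parity[s]])
           for s in range(3)]
assert rebuilt == blades[2]
```

The same asymmetry drives the design point above: sector-level errors are absorbed locally and cheaply, so the expensive cross-blade rebuild path is reserved for genuine blade failures.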
This RAID 6+ triple parity protection provides more than 150 times better reliability than equivalent dual parity approaches without a significant degradation in performance.
PanFS RAID is implemented by applying RAID erasure coding algorithms on a per-file basis. The architecture protects against physical device failure by spreading fragments of files across storage hardware elements. Large files are typically striped across 8 to 11 storage blades, then extended with additional stripes until a file is spread across all storage blades for optimum performance. Since data protection is implemented within a given stripe, files are inherently shielded from the failure risks of hundreds or thousands of drives at scale. PanFS RAID 6+ is highly efficient, requiring only roughly 25% capacity overhead.
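As a rough illustration of the overhead arithmetic, assuming a stripe of eight data fragments plus two horizontal parities (one plausible reading of the 8-to-11-blade range; the exact per-file layout is chosen by PanFS):

```python
# Back-of-the-envelope capacity overhead for a dual-parity stripe.
# Illustrative stripe widths only; vertical parity adds a small
# additional per-blade cost on top of this.
def overhead(data_fragments, parity_fragments=2):
    """Parity capacity as a fraction of data capacity."""
    return parity_fragments / data_fragments

for width in (8, 9):
    print(f"{width} data + 2 parity -> {overhead(width):.0%} overhead")
# An 8-wide data stripe with 2 parities gives 25% horizontal-parity
# overhead, consistent with the "roughly 25%" figure above.
```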
Isn’t reliability supposed to decrease as a storage system is scaled? Not with PanFS per-file distributed RAID. Here’s why: in a 10-drive system (figure 1), file components are distributed across ten drives, with eight-drive striping for large files and triple mirroring for small files. When three drives fail, the impact to files is substantial: File 5 cannot be rebuilt and must be restored from a backup, Files 1 and 4 have dual drive failures, and File 3 has a single drive failure. Only File 2 is unaffected. The large impact results from many files sharing the same limited number of drives.
Figure 1: Panasas system with ten drives
The impact to the same five files is dramatically better after scaling out to a 20-drive system (figure 2). While maintaining the same stripe width as the 10-drive system, per-file RAID disperses file fragments across the larger system so that multiple drive failures are less likely to touch a given file. Although three files are affected by three drive failures in the 20-drive system, no file suffers a dual failure, and all files can be rebuilt. There are no files to restore from backups.
Figure 2: Panasas system with 20 drives
As a result, PanFS per-file distributed RAID inherently increases reliability with scale. PanFS also increases reliability with linearly scalable parallel rebuild performance, triple parity, triple mirrored small files, and Extended File System availability.
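The scaling argument above can be checked with a quick Monte Carlo sketch. The placement model is hypothetical (each file's stripe lands on a uniformly random set of drives, a simplification of real PanFS placement), but it shows the trend: with a fixed stripe width, the chance that three simultaneous drive failures hit the same file twice falls sharply as the drive count grows.

```python
import random

def dual_failure_rate(n_drives, stripe=8, trials=20000, seed=1):
    """Estimate the fraction of files whose stripe loses >= 2 fragments
    when 3 drives fail at once. Hypothetical uniform-random placement."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        failed = set(rng.sample(range(n_drives), 3))       # 3 failed drives
        placement = set(rng.sample(range(n_drives), stripe))  # one file's stripe
        if len(failed & placement) >= 2:
            hits += 1
    return hits / trials

# Small system: most files take a double hit (matches the figure 1 story).
print(f"10 drives:  {dual_failure_rate(10):.1%} of files lose >= 2 fragments")
# Large system, same stripe width: double hits become rare (figure 2 story).
print(f"200 drives: {dual_failure_rate(200):.1%} of files lose >= 2 fragments")
```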
All small files are triple-mirrored under PanFS, providing them with an exceptional level of data protection. Each copy resides on a separate storage blade, normally on Solid State Disk (SSD). As a result, two simultaneous blade failures related to a given small file will not cause loss of data. When a mirrored copy of files must be rebuilt, the rebuild occurs rapidly at flash speeds to reduce the amount of time in degraded mode. By keeping small files on SSDs in three independent mirrors, the reliability of these files is exceptionally high. File availability is enhanced through exceptional rebuild performance – even for millions of small files. With fast rebuilds, the chance of one too many hardware failures is made even lower.
Traditional block-based RAID approaches are blind to the files stored on particular drives, so a lengthy rebuild of all blocks on the affected drive is triggered to restore fault tolerance – including blocks not related to a repairable file (and unused blocks not storing anything yet). As a result, even rebuilding one hundredth of one percent of disk data can require 100% of a disk to be rebuilt. With currently shipping drive sizes, this can take days or weeks on traditional RAID arrays as opposed to minutes and hours with PanFS.
PanFS RAID substantially reduces rebuild time by limiting the scope of rebuilds to the specific files that were affected and by rebuilding in parallel, with rebuild performance scaling linearly with the size of the system. This addresses the industry-wide RAID issue of long rebuild times for large-capacity disk drives (4TB, 6TB, 8TB, and beyond). PanFS per-file RAID first reduces the rebuild job to only those files that require repair. Files consist of multiple component objects stored across multiple object storage devices (OSDs). When a sector error on a given OSD prevents access to a file component object, the affected file is known to PanFS and can be rapidly and efficiently rebuilt to known good sectors.
Parallel data channels to storage blades accelerate rebuild performance far beyond what is attainable with traditional block-based RAID systems. As system capacity scales and more shelves are added, the director blade on each additional storage shelf joins other director blades to perform rebuilds in parallel. The result is linearly scalable rebuild performance – growing from a single shelf to twenty shelves of ActiveStor storage will yield 20x faster rebuild performance. Also, RAID rebuild performance is efficient as per-file RAID only has to rebuild what is affected for a given file. More often than not, this will be a faster RAID 5 rebuild to restore one missing object even after two simultaneous failures, and many files will not have been affected at all—especially at scale.
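A toy model of this linear scaling, using made-up per-director rebuild rates purely to show the shape of the curve:

```python
# Toy model of parallel rebuild scaling. The affected-data size and the
# per-director rebuild rate are hypothetical illustration values.
def rebuild_hours(affected_tb, shelves, tb_per_hour_per_director=2.0):
    """Rebuild time when each shelf's director blade works in parallel."""
    return affected_tb / (shelves * tb_per_hour_per_director)

for shelves in (1, 5, 20):
    hours = rebuild_hours(affected_tb=8, shelves=shelves)
    print(f"{shelves:2d} shelves -> {hours:.1f} hours")
# Growing from 1 shelf to 20 shelves cuts rebuild time 20x in this model,
# because every added director blade joins the rebuild in parallel.
```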
With dual or triple parity protection, catastrophic storage failures don’t occur often. However, when they do occur, the impact can be devastating and traditionally often requires a full file system restore – not something a storage administrator with hundreds of terabytes, or more, of data ever wants to contemplate. PanFS uses Extended File System Availability (EFSA) to enhance business continuity by keeping the file system online when lesser systems would be shut down. In RAID 6+, file system directory data is quadruple-mirrored, protecting the namespace one degree deeper than the data. This allows the file system to remain available and usable even after a simultaneous triple drive failure. With the file system still online, PanFS can typically provide the storage administrator with a specific list of files to be restored – typically a very small list (approximately one in 15 million files for a system with 200 drives). After restoring those specific files, full file system availability and reliability can be regained. Without the benefits of RAID 6+ per-file data protection and EFSA, IT is often faced with a much larger restore that can take days or weeks to complete.
Unlike legacy RAID approaches, the PanFS architecture also linearly scales the performance of its RAID engine with the number of compute clients. When accessing the file system via the DirectFlow protocol, each client uses a small amount of a CPU core as a RAID parity engine. When a Linux cluster is scaled to hundreds or thousands of nodes, it provides very high RAID capability while the amount of CPU utilized is negligible. This drives exceptional RAID performance and eliminates the bottlenecks found in legacy hardware RAID controllers. As a result, RAID performance is freed to scale linearly with application resources.
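A minimal sketch of the client-side idea, with threads as stand-ins for independent DirectFlow clients and single XOR parity per stripe for brevity (RAID 6 computes two parities per stripe; the stripe sizes here are made up):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: each client computes parity for its own write stripe, so
# parity work scales out with the client count instead of funneling
# through a central hardware RAID controller.
def client_write(stripe):
    """One client's write path: compute parity, return fragments + parity."""
    parity = bytearray(len(stripe[0]))
    for frag in stripe:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return stripe + [bytes(parity)]  # what would be sent to storage blades

# 100 clients, each writing an 8-fragment stripe of 16-byte fragments.
stripes = [[bytes([c + f] * 16) for f in range(8)] for c in range(100)]

with ThreadPoolExecutor() as pool:  # stand-in for independent client CPUs
    results = list(pool.map(client_write, stripes))

assert all(len(r) == 9 for r in results)  # 8 data fragments + 1 parity
```

Because each stripe's parity costs only a sliver of one core, adding clients adds parity throughput in step, which is the bottleneck-free behavior described above.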