Celebrating 25 Years of Using Erasure Codes to Implement RAID

Geoffrey Noer

Sr. Director of Product Marketing

Happy birthday, RAID!  Twenty-five years ago, in March 1988, Panasas founder and chief scientist, Dr. Garth Gibson, published the paper “A Case for Redundant Arrays of Inexpensive Disks (RAID)” with co-authors David Patterson and Randy Katz, inventing a concept that would prove central to the storage industry for decades to come. Congratulations to all three of these storage visionaries!

For those of you who have not yet seen them, Garth last year recorded a series of three interviews on the history, present, and future of RAID. If you’re interested in how RAID has evolved over the last 25 years, I highly recommend watching them.

In this blog post, I’d like to address the unnecessary confusion about RAID that some storage vendors are creating by claiming that their systems are superior because they “don’t do RAID” and instead use “erasure codes” to protect data. This is a very strange case to make, since RAID and erasure codes are so tightly linked. In fact, what these vendors should be saying is that their RAID algorithms (often Galois-field-arithmetic implementations such as Reed-Solomon) are implemented in software on a per-file basis, which offers a number of indisputable advantages over legacy per-device approaches.

Let’s clear up the confusion, starting with a basic definition. An erasure code is a data-protection algorithm that allows you to recover from one or more failures, provided that you know which elements have failed. RAID 4, 5, and 6 are all examples of erasure codes. Panasas dual parity, based on two independent applications of RAID 5 algorithms, is also erasure-code based.
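
To make the “known failure” part concrete, here is a minimal Python sketch of a single-parity erasure code, the same idea that underlies RAID 5. It is illustrative only, not Panasas code: real implementations operate on fixed-size blocks, RAID 6 and dual parity need more than a single XOR parity, and the function names here are invented for the example.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(fragments: list[bytes]) -> bytes:
    """Compute one parity fragment over equal-length data fragments."""
    parity = bytes(len(fragments[0]))  # all zeros
    for frag in fragments:
        parity = xor_bytes(parity, frag)
    return parity

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the one missing fragment -- we must know WHICH
    fragment was lost (an "erasure"), not merely that one was."""
    missing = parity
    for frag in surviving:
        missing = xor_bytes(missing, frag)
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]
p = encode(data)
# Fragment 1 is lost, but its position is known:
assert recover([data[0], data[2]], p) == data[1]
```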

The second part of the confusion appears to be that some people consider RAID a concept that applies only to the protection of entire physical devices (i.e., hard drives). While the original RAID paper does not cover per-file RAID, it does explicitly state that although its examples assume a hardware-based approach, implementing the algorithms in software could be superior depending on the circumstances.

Fast-forward twenty-five years, and that is precisely what has happened. Software-based approaches are proving architecturally more flexible, enabling highly scalable RAID implementations. This is ideally achieved by applying RAID algorithms on a per-file basis, using erasure codes to protect against physical device failure by spreading fragments of each file across storage hardware elements. At Panasas, we believe this should still be called RAID: the purpose has not changed (protecting against failures), and the RAID algorithms (erasure codes) are still largely the same ones being applied to protect data. The difference is that they are applied more cleverly, in software, on a per-file basis.
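
As a rough sketch of what “per-file” means, the following example (again illustrative only; the chunk size and stripe width are arbitrary assumptions, not Panasas parameters) stripes one file’s bytes across several storage elements and computes a parity fragment for each stripe entirely in software:

```python
from functools import reduce

CHUNK = 4  # stripe unit in bytes -- arbitrary for the example

def parity(frags: list[bytes]) -> bytes:
    """XOR equal-length fragments together into one parity fragment."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))

def stripe_file(data: bytes, n_data: int) -> list[list[bytes]]:
    """Split one file into n_data data columns plus a parity column,
    each column destined for a different storage element."""
    columns: list[list[bytes]] = [[] for _ in range(n_data + 1)]
    for i in range(0, len(data), CHUNK * n_data):
        stripe = [data[i + j * CHUNK : i + (j + 1) * CHUNK].ljust(CHUNK, b"\0")
                  for j in range(n_data)]
        for col, frag in enumerate(stripe):
            columns[col].append(frag)
        columns[n_data].append(parity(stripe))  # software parity, per file
    return columns

cols = stripe_file(b"the quick brown fox jumps over the lazy dog", 3)
# Losing any one column (device) leaves enough to rebuild the file.
```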

Per-file RAID is especially valuable when implementing RAID protection for scale-out NAS solutions like Panasas ActiveStor® with its fully integrated PanFS® parallel file system. The Panasas architecture benefits strongly from its software-based, object RAID implementation:

  • Per-file Object RAID. PanFS stores every file as a collection of objects, automatically selecting the optimal type of RAID protection based on the size and type of each individual file. Small files are mirrored in two objects, transitioning to striping across a larger set of objects as the file grows. This lets PanFS optimize for both performance and capacity usage in a way that traditional block-based RAID implementations cannot (see the sketch after this list).
  • Scalable RAID Parity Performance. The parity computation in PanFS is performed in client-side driver code and scales with the number of accessing clients (compute servers). The tiny amount of CPU and memory used on each client goes unnoticed, but multiplied across hundreds or thousands of clients it delivers aggregate parity performance that dwarfs what legacy hardware RAID controllers or in-band NFS servers with software RAID can achieve.
  • File-based RAID Rebuild. RAID that protects block devices has to rebuild every block of a failed drive, including all of its unused capacity, because it has no file-level knowledge. PanFS rebuilds only the capacity actually in use, by rebuilding just the files that had objects (parts of the file) on the failed drive (also illustrated below).
  • Scalable, Distributed RAID Rebuild Performance. Because the objects making up each file are spread evenly across a subset of the blades in a Panasas system, when a drive needs to be rebuilt, all blades in the system are read and written in parallel. Rebuild performance therefore scales in proportion to the size of the system, maintaining high reliability and availability even for very large deployments.
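
To make the first and third bullets concrete, here is a toy model in Python. It is not PanFS code: the file-size threshold, stripe width, drive count, and function names are all invented for illustration. It picks a per-file layout (mirroring small files, striping larger ones with parity) and, on a simulated drive failure, rebuilds only the files that actually had an object on that drive.

```python
import random

SMALL_FILE = 64 * 1024   # invented threshold -- PanFS chooses its own
N_DRIVES = 10            # invented system size

def layout(file_size: int, stripe_width: int = 4) -> list[str]:
    """Per-file layout: mirror small files, stripe larger ones."""
    if file_size <= SMALL_FILE:
        return ["mirror", "mirror"]                  # two full copies
    return ["data"] * stripe_width + ["parity"]      # RAID-5-style stripe

def place(files: dict[str, int]) -> dict[str, list[tuple[str, str]]]:
    """Scatter each file's objects across randomly chosen drives."""
    placement = {}
    for name, size in files.items():
        objs = layout(size)
        drives = random.sample(range(N_DRIVES), len(objs))
        placement[name] = [(f"drive{d}", kind)
                           for d, kind in zip(drives, objs)]
    return placement

files = {"notes.txt": 4_096, "results.dat": 2_000_000}
placement = place(files)

failed = "drive3"
# File-based rebuild: only files with an object on the failed drive
# are touched -- never the unused blocks of the device.
to_rebuild = [name for name, objs in placement.items()
              if any(drive == failed for drive, _ in objs)]
print("rebuild only:", to_rebuild)
```

Because the surviving objects of the affected files sit on many different drives, the reads that feed a real rebuild are naturally spread across the whole system, which is where the scaling described in the last bullet comes from.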

Even in an industry characterized by rapid change, RAID is, if anything, more important today than it was when it was invented twenty-five years ago. Rather than drawing an artificial distinction between RAID and erasure codes and creating unnecessary confusion in the market, vendors would be better served by focusing on the advantages of their per-file RAID implementations. After all, RAID remains the key concept that enables pretty much every credible storage vendor to deliver high-quality data protection today.

So again, happy birthday, RAID! While I’m quite sure the storage landscape will look very different twenty-five years from now, I feel confident that the core RAID concepts will still be in use, helping new generations of products deliver high reliability and availability to markets we can only dream about today.