Happy birthday, RAID! Twenty-five years ago, in March 1988, Panasas founder and chief scientist, Dr. Garth Gibson, published the paper “A Case for Redundant Arrays of Inexpensive Disks (RAID)” with co-authors David Patterson and Randy Katz, inventing a concept that would prove central to the storage industry for decades to come. Congratulations to all three of these storage visionaries!
For those of you who have not yet watched them, Garth recorded a series of three interviews last year on the history, present, and future of RAID. If you’re interested in how RAID has evolved over the last 25 years, I highly recommend watching them.
In this blog post, I’d like to address the unnecessary confusion concerning RAID that some storage vendors are creating by claiming that their systems are superior because they “don’t do RAID” and instead use “erasure codes” to protect data. This is a very strange case to make, since RAID and erasure codes are so tightly linked. In fact, what these vendors should be saying is that their RAID algorithms (often based on Galois field arithmetic implementations such as Reed-Solomon) are implemented in software on a per-file basis, offering a number of indisputable advantages over legacy per-device approaches.
Let’s clear up the confusion, starting with a basic definition of an erasure code. An erasure code is a data-protection algorithm that allows you to recover from one or more failures, provided that you know which element has failed. RAID 4, 5, and 6 are all examples of erasure codes. Panasas dual parity, based on two independent applications of RAID 5 algorithms, is also erasure code based.
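To make that definition concrete, here is a minimal sketch of a single-erasure code in the style of RAID 4/5: parity is the XOR of the data blocks, and any one block can be rebuilt from the survivors, but only because we know which block was erased. (The function names here are illustrative, not from any real product.)

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def make_parity(data_blocks):
    """Compute the RAID 4/5-style parity block."""
    return xor_blocks(data_blocks)

def recover(surviving_blocks, parity):
    """Rebuild the one erased block.

    This works only because we know WHICH block failed; XOR-ing the
    parity with all survivors cancels them out, leaving the lost block.
    """
    return xor_blocks(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)
# Pretend the second block is lost:
rebuilt = recover([data[0], data[2]], parity)   # == b"BBBB"
```

Note that the scheme never detects the failure itself; identifying the failed element (e.g., a drive reporting errors) is the precondition that makes an erasure code usable. Tolerating two failures, as dual parity does, requires a second, independent parity computation.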
The second part of the confusion appears to be that some people consider RAID to be a concept that only applies to the protection of entire physical devices (i.e. hard drives). While the original RAID paper does not cover per-file RAID, it does explicitly state that although the examples in the paper assume a hardware-based approach, implementing the algorithms in software could be a superior approach depending on the circumstances.
Fast forward twenty-five years, and this is precisely what has happened. Software-based approaches are proving to be architecturally more flexible, enabling highly scalable RAID implementations. This is ideally achieved by applying RAID algorithms on a per-file basis, employing erasure codes to protect against physical device failure by spreading fragments of files across storage hardware elements. At Panasas, we believe this should still be called RAID: the purpose has not changed (protect against failures), and the RAID algorithms (erasure codes) being applied to protect data are still largely the same. The difference is that the algorithms are now being applied more cleverly, in software and on a per-file basis.
Per-file RAID is especially valuable when implementing RAID protection for scale-out NAS solutions like Panasas ActiveStor® with its fully integrated PanFS® parallel file system. The Panasas architecture benefits strongly from its software-based, object RAID implementation.
Despite being in an industry characterized by rapid change, RAID is, if anything, more important today than it was when it was invented twenty-five years ago. Rather than drawing an artificial distinction between RAID and erasure codes and causing unnecessary confusion in the market, vendors would be better served by focusing on the advantages of their per-file RAID implementations. After all, RAID remains the key concept that enables virtually all credible storage vendors to deliver high-quality data protection today.
So again, happy birthday, RAID! While I’m quite sure the storage landscape will look very different twenty-five years from now, I feel confident that the core RAID concepts will still be in use, helping many new generations of products deliver high reliability and availability to markets we can only dream about today.