The World of SSDs According to Garth and IDC

Barbara Murphy

CMO

Panasas founder and chief scientist Dr. Garth Gibson and Steve Conway, research vice president in IDC's high performance computing group, recently took some time to discuss HPC trends and the forces affecting storage in the HPC market. This second of three video chapters discusses the role that solid state technology plays in HPC storage, now and in future scenarios. pNFS was covered in the first installment, and big data will be explored in the third and final chapter (coming soon).

Panasas and IDC Solid State Disk discussion transcript:

Steve Conway: SSDs – where do you see those capabilities playing a role in the future, even if it’s not certain exactly how that would go? What are some of the potential things that SSDs could help with?

Garth Gibson: SSD means “Solid State Disk,” and that implies that the solid state part is presented as a disk. That’s not the only way that solid state can show up. We are already starting to differentiate the PCIe interface from the SSD interface: whether you’re talking to it with a SATA command or with something that goes across a device driver. That’s going to get richer. There are non-volatile memory architectures for talking to it as something that approaches DRAM in its abstraction. So we’re going to see a richer layering on that side, which is going to let software get faster with respect to how much effort it has to put into being durable. As durable gets easy, we can respond to latency faster.
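To make that durability layering concrete, here is a minimal, hypothetical Python sketch (not Panasas code) contrasting a block-style write-plus-fsync path with a memory-mapped region flushed in place, which is closer to the approaching-DRAM abstraction Gibson describes. The ordinary disk-backed file here merely stands in for a persistent-memory region.

```python
# Illustrative only: two durability paths, with an ordinary file standing in
# for a non-volatile memory region.
import mmap
import os

RECORD = b"sensor=42 value=3.14159\n"

def durable_block_write(path: str, payload: bytes) -> None:
    """Classic path: go through the block interface, then force it to media."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, payload)
        os.fsync(fd)          # durability costs a full device round trip
    finally:
        os.close(fd)

def durable_mapped_write(path: str, payload: bytes, size: int = 4096) -> None:
    """Mapped path: update bytes in place, then flush the mapped region."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        os.ftruncate(fd, size)
        with mmap.mmap(fd, size) as region:
            region[:len(payload)] = payload
            region.flush()    # with true NVM this approaches a cache flush
    finally:
        os.close(fd)

if __name__ == "__main__":
    durable_block_write("block.log", RECORD)
    durable_mapped_write("mapped.dat", RECORD)
```

The point of the contrast is only that, as the durable layer gets closer to memory semantics, the software effort (and latency) spent on making an update durable shrinks.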

On the other side of the world, the SSDs have undercut the disks in their highest-margin marketplace, the place where they pushed the RPMs to the highest speed and gave the fastest disk-based random access. Maybe we’re talking 400-500 accesses per second on a disk, but that’s still far short of an SSD’s 10,000 to 100,000. So, as that is undercut, the disk industry needs to concentrate more and more on what it does best: areal density. On that side, it’s going to push heat-assisted magnetic recording (HAMR) out sooner, but it’s also going to draw in shingling, or shingled magnetic recording (SMR). This is interesting because shingled magnetic recording changes the way the drive is used by the user. It forces writes to overlap previous writes, and that means you cannot write at random; you have to write in large chunks. So we are going to see something like the SSD wear-leveling issue, where we have to overwrite large chunks of the disk. So the disks are going to get faster – yes, but mainly they’re going to get denser. They’re going to have a lot more areal density; that’s their primary value, lower dollars per gigabyte, and I think that’s going to leave the drives with more challenges around low-latency writes. We’re going to fix that by coupling hybrids of the big, cheap drives with good areal density and SSDs as a metadata layer with small random access, maybe even a long-term repository for data on its way to the disk; then, at the high end, the almost-DRAM type of durable solid state to make the latency particularly fast.
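As a rough illustration of the access pattern shingled recording imposes, the sketch below stages small random writes (which in a real hybrid would land on an SSD tier) and only emits them to the shingled disk as large sequential bands. The band size, class name, and placeholder write call are assumptions for illustration, not any vendor's on-disk format.

```python
# Hypothetical staging of small writes into large sequential SMR band writes.
BAND_SIZE = 256 * 1024 * 1024  # one shingled band; rewritten as a unit

class ShingledBandWriter:
    def __init__(self, band_size: int = BAND_SIZE):
        self.band_size = band_size
        self.staged = []        # stands in for the SSD staging tier
        self.staged_bytes = 0
        self.bands_written = 0

    def write(self, payload: bytes) -> None:
        """Accept a small random write; defer it rather than seek the disk."""
        self.staged.append(payload)
        self.staged_bytes += len(payload)
        if self.staged_bytes >= self.band_size:
            self.flush_band()

    def flush_band(self) -> None:
        """Emit everything staged so far as one large sequential band write."""
        if not self.staged:
            return
        band = b"".join(self.staged)    # a real system would also persist a
        self._write_sequentially(band)  # map of logical block -> band offset
        self.staged.clear()
        self.staged_bytes = 0
        self.bands_written += 1

    def _write_sequentially(self, band: bytes) -> None:
        # Placeholder for the actual sequential write to the SMR zone.
        pass
```

This is the sense in which SMR resembles the SSD wear-leveling problem: small updates cannot land in place, so large regions are rewritten wholesale and a mapping layer tracks where the data actually lives.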

SC: Here’s a follow-on question that is speculative, but I think one of the trends we see long term is systems, on both the compute and the storage side, incorporating more intelligence, having more autonomic capabilities, self-healing capabilities…and I would think that would be associated with a need for faster data movement and better latency characteristics. Do you see SSDs playing a role there?

GG: SSDs make a lot of the hard problems easier because you don’t have to concentrate as hard on making large sequential transfers, so they allow an intelligent subsystem – fundamentally, it’s manageability. You want the subsystems to scale out to very large infrastructures and yet not become a real headache for the owners to manage. You want the systems, as Panasas does, to have a central view of all of their storage and make decisions on how to use the storage, how to add storage, how to retarget storage, how to tier the data around…you want that to be as built in to the infrastructure as possible. That’s going to mean self-healing, that’s going to mean autonomics, that’s going to mean a large variety of specializations inside the subsystem. So it makes storage, as a complete solution, much more intelligent. I think that SSDs make it easier for that storage to do things in small pieces, to maintain big tables of internal measurements to make sure they are making the right decisions, and to make them over a long period of time.
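Here is a hypothetical sketch of the kind of "big tables of internal measurements" an intelligent subsystem might keep: per-device health counters feeding a simple self-healing decision. The field names, thresholds, and rebuild policy are invented for illustration only.

```python
# Illustrative per-device health tracking driving a self-healing decision.
from dataclasses import dataclass, field

@dataclass
class DeviceStats:
    read_errors: int = 0
    avg_latency_ms: float = 0.0
    samples: int = 0

    def record(self, latency_ms: float, error: bool) -> None:
        self.samples += 1
        self.read_errors += int(error)
        # Running mean keeps the table small while covering a long history.
        self.avg_latency_ms += (latency_ms - self.avg_latency_ms) / self.samples

@dataclass
class HealthMonitor:
    devices: dict = field(default_factory=dict)

    def record(self, dev: str, latency_ms: float, error: bool = False) -> None:
        self.devices.setdefault(dev, DeviceStats()).record(latency_ms, error)

    def devices_to_rebuild(self, max_errors: int = 10,
                           max_latency_ms: float = 50.0) -> list:
        """Pick devices whose measurements suggest migrating data off them."""
        return [dev for dev, s in self.devices.items()
                if s.read_errors > max_errors or s.avg_latency_ms > max_latency_ms]
```

The small, random updates such bookkeeping generates are exactly the workload SSDs absorb cheaply, which is why flash makes this style of continuous self-management easier to build in.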

SC: I think this is a related more general question: how do you see SSDs, kind of long term, fitting in with the direction that Panasas is taking? How do their capabilities map on to where you see yourselves headed as a company?

GG: I think that the big trend will be toward much more data: much more big data and much more small data. So the storage subsystem is going to need to be very good in two extreme cases: 1) it’s going to have a huge amount of data that it needs to keep for a long time and that comes out of it at very high speeds, and it’s going to have to absorb that; we want that on disk for cost management; and 2) it’s going to have a whole lot of metadata coming by very fast that’s optimizing navigation within the large data but is also enriching new types of search, and we’re going to have to be very fast with that. I think the net result is that you’re going to want an infrastructure that’s extremely smart about how to manage its database of very small structured accesses and very large unstructured accesses.
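Those two extremes map naturally onto a tiering decision. Below is a minimal routing sketch, with an assumed 64 KiB cutoff and assumed tier names, showing small structured (metadata-like) accesses going to flash and large unstructured streams going to disk.

```python
# Illustrative routing of I/O between flash and disk in a hybrid system.
SMALL_IO_CUTOFF = 64 * 1024  # assumed threshold, not a product setting

def choose_tier(size_bytes: int, is_metadata: bool) -> str:
    """Return which tier should absorb this write."""
    if is_metadata or size_bytes < SMALL_IO_CUTOFF:
        return "ssd"    # low-latency, high-IOPS small random access
    return "hdd"        # cheap capacity, high-bandwidth sequential streams

if __name__ == "__main__":
    print(choose_tier(4096, is_metadata=True))                 # -> ssd
    print(choose_tier(128 * 1024 * 1024, is_metadata=False))   # -> hdd
```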

SC: One other question, and we may be off the SSD topic at this point, but one of the desires (I wouldn’t call it a trend yet) that people have is to be able to pre-filter data so that you store only what you really need to store; compression is another strategy that could be used. Where do you see that fitting into Panasas?

GG: In some sense, one of the hardest bottlenecks in computing coming down the pipe is the memory: the DRAM bandwidth bottleneck. That is not scaling up, at least not cost-effectively, as well as anything else, so what I see is endemic pre-filtering, indexing, and compression to try to manage that bottleneck as much as possible. We will compress in order to stream large amounts of data to disk more efficiently, but I think the really big win is compressing so that it comes out of the memory efficiently and goes back into the memory efficiently. I think we will be doing a lot more of the various forms of representing the data at multiple resolutions: the full raw data, the partially processed or pre-filtered data, and the fully indexed versions of the same data for random search access.
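As a small illustration of keeping the same data at multiple resolutions, the sketch below produces a raw view, a crudely pre-filtered (strided) view, and a compressed form for streaming. The stride and compression level are arbitrary choices; a real system would use domain-specific filtering and indexing.

```python
# Illustrative multi-resolution views of one dataset.
import zlib

def multi_resolution(raw: bytes, stride: int = 8) -> dict:
    """Return raw, pre-filtered, and compressed representations."""
    prefiltered = raw[::stride]               # crude pre-filter: every Nth byte
    compressed = zlib.compress(raw, level=6)  # smaller stream to and from memory
    return {"raw": raw, "prefiltered": prefiltered, "compressed": compressed}

if __name__ == "__main__":
    data = bytes(range(256)) * 4096           # ~1 MiB of sample data
    for name, blob in multi_resolution(data).items():
        print(f"{name:12s} {len(blob):>9d} bytes")
```

The payoff Gibson points to is not just smaller files on disk; it is that a query can pull the cheapest adequate representation through the DRAM bottleneck instead of the full raw data every time.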