pNFS According to Garth and IDC
January 24, 2013 - 6:37pm
Panasas founder and chief scientist Dr. Garth Gibson and Steve Conway, research vice president in IDC's high performance computing group, recently took some time to discuss HPC trends and the forces affecting storage in the HPC market. The role pNFS will play in HPC storage is discussed in this first of three video chapters. The role of solid state technology in HPC installations and in big data will be explored in upcoming installments.
Panasas and IDC pNFS discussion transcript:
Steve Conway: The first question I have for you is about pNFS. That standard has been developing, and every year the question is, 'Is this the year for pNFS?' Where do you see that?
Garth Gibson: This is the year that Red Hat will ship the first working versions of pNFS in the client code. We’ve of course had server versions for a while. We needed the standard first – we got the standard. Then we needed server versions from the main providers. We have a few of those out. Now, what Panasas has really been focused on is the quality of the code going into the Linux clients. We’ve had good people working on that for a few years now. That code made it into the head of tree late last year and it’s now being pushed out, through Red Hat in particular, in their first experimental release of pNFS. It’s not all the protocols yet. There’s quite a bit coming down the pipe, and there is, in fact, new energy in other back-end protocols to take advantage of other technologies. So pNFS is burgeoning in terms of the engineering effort going into it, proprietary clients from other players, the availability of Linux versions, and multiple protocols – the technology is all in place. We’re seeing pull from customers. They like the idea of commodity infrastructure. It really fits the notion of the Hadoop world: if you have high speed parallel file systems supporting NAS through a protocol that all the clients support, then it fits in under Hadoop really well.
So, yes, we will have the first commercial implementations this year (2013). I see more complete new protocols rolling out of the distribution houses in the next year or two.
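For readers who want to try the Linux client code Gibson describes, a pNFS mount is simply an NFS mount that negotiates version 4.1; when the server supports pNFS layouts, the client uses them automatically. A minimal sketch (the server name and export path below are hypothetical placeholders):

```shell
# pNFS support arrives with NFS version 4.1, so request that version explicitly.
# "server.example.com:/export" stands in for a pNFS-capable export.
mount -t nfs -o vers=4.1 server.example.com:/export /mnt/pnfs

# Confirm the mount negotiated v4.1, the prerequisite for pNFS layouts.
nfsstat -m
```

If the server grants a layout, the client reads and writes in parallel against the data servers; if not, I/O simply falls back to the ordinary NFS path through the metadata server, which is part of the appeal Gibson describes.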
SC: Who will be the early adopters in your view? Will it be on the enterprise side? Will it be on the government side? Or is there a profile that spans those spaces?
GG: I think it’s more likely the enterprise side. pNFS fits into the big iron of mainstream, reliable NFS storage, and it’s more about multi-sourcing the technology. The high end government sites will support protocols that have one supplier and narrowly focused expertise – that do very specific things. They have spent quite a bit on things like the Lustre base that’s going into Intel’s Whamcloud (for example), and I think they will continue spending a lot on that and having it solve very specific problems for them. pNFS is targeted more at the high end of the classic NFS space, getting fast enough and parallel enough that it can play in the broader HPC space. I think it will appeal to the people who are more concerned with integrating enterprise and HPC computing.
SC: At the storage architecture levels, how do you see pNFS mapping onto trends in storage architectures?
GG: pNFS will mostly look like an evolution of the NFS infrastructure. So inasmuch as file based storage has manageability advantages even in virtualized computing worlds, pNFS will make it easier to deliver the performance you expect. I think it will strengthen the play of the file infrastructures. Other techniques, like advanced SSD, are neutral to pNFS. We figured out how to use SSD in the local machines first, but we can also put it in the NFS/pNFS server – Panasas has done so – which makes our wide range of support for small and large files much more effective. I think we’re going to see more and more SSD used for data of particular importance, and we’re going to want to do that both in local and in network storage. So I think it becomes a good base for all high end storage.
SC: You mentioned virtualized environments. Do you think that pNFS also has a nice role to play in private or public clouds?
GG: The public cloud players are very rich in technology and extremely cost conscious, because they are operating their own infrastructures. They are probably the least likely to do something because it’s best-in-class, and the most likely to do it because they can build some component of it themselves. So I think that’s not the first place you will see it. In the private cloud, the ability to rely on vendors that can produce best-in-class technology, support it, and make sure the darn thing works has a real advantage. I think the integration into existing disaster recovery and manageability infrastructures is more important in the private cloud world.
SC: You mentioned Red Hat as an implementer this year. How do you think pNFS is going to be driven into the marketplace? Who will be pushing that? Yourselves, I assume, among them.
GG: Yes. Panasas is pushing it. I can name the companies that are the most active in the development…
SC: Not so much the names of the companies, but the categories of players in the marketplace who are going to be pushing this.
GG: I think the plan for pNFS in the marketplace is to feed the clusters that are doing lots of computation. In the past that has mostly meant HPC, but increasingly it is the big data world, and that is what the market is looking at – traditional NFS just did not satisfy, so you saw these localized, converged file systems being invented. But with NFS/pNFS speeds available, you get the advantages of specialization and reliability enhancements in dedicated storage.
SC: So, do you see it, as we do at IDC, as a kind of ongoing dissolution of the boundaries between traditional HPC and some parts of enterprise computing?
GG: Absolutely. I think the business side of HPC is less and less about special-purpose, custom-built projects – the costs are too high. It is more about solving high end compute problems that are shared across industry and government.