This is part two in a Q&A series with Panasas founder and CTO, Dr. Garth Gibson. He was recently asked by a large storage research center to comment on various HPC storage topics. Garth’s thoughts on solid state technology were captured in the following exchange.
Question: Solid state drives have given a renewed impetus to the development of cache solutions, and both hardware and software solutions have been introduced to market. Some examples are Fusion-IO’s IO-cache, LSI’s MegaRAID cache software, Adaptec’s MaxCache, and Sanrad’s VXL software. Of these, the first three are hardware manufacturers selling software as an enhancement to their offerings; Sanrad’s solution is purely software. Against this backdrop, can members of the SAB comment on:
Gibson: It is hard to make money long term with pure software solutions in the storage space. Such solutions cannot defend high margins; they tend to be bought by systems vendors, or reverse engineered and offered at no additional cost by operating system vendors.
Question: There are two possibilities for placing NVM/flash across the storage system architecture: distributed across the disks (e.g. hybrid drives), or aggregated/centralized as a single pool (e.g. SSDs).
Normally, a distributed architecture is better than a centralized one in terms of scalability, performance, and resources. Does the distributed NVM architecture (based on a hybrid drive array) have the necessary advantages over a centralized SSD architecture for deployment in enterprise applications? If not, what are the possible explanations?
Gibson: Solid state embedded inside a magnetic disk is effective as a performance-enhancing cache, but it is isolated on the disk side of the storage interconnect. This means its economics are tied to disk economics, and its use by host applications is hard to specialize. Moreover, the very low latency possible with solid state is greatly diminished by communication through interconnects designed for multi-millisecond access. PCIe-based SSDs yield much higher access rates, higher bandwidths, and high margins. I expect a wide variety of solid-state use in non-disk devices, not limited by SATA and SAS.
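Gibson’s interconnect point can be put in rough numbers. A minimal back-of-the-envelope sketch, where every latency figure is an illustrative assumption rather than a measurement of any real device:

```python
# Back-of-the-envelope latency sketch. All figures below are assumed for
# illustration only; real devices and I/O stacks vary widely.

FLASH_MEDIA_US = 50   # assumed raw flash read latency, microseconds
SAS_PATH_US = 200     # assumed overhead of a SATA/SAS path built for ms-class disks
PCIE_PATH_US = 10     # assumed overhead of a direct PCIe path

sas_total = FLASH_MEDIA_US + SAS_PATH_US     # disk-side flash, end to end
pcie_total = FLASH_MEDIA_US + PCIE_PATH_US   # PCIe-attached flash, end to end

# With ms-era interconnects, the path (not the media) dominates total latency.
print(f"disk-side flash: {sas_total} us, PCIe flash: {pcie_total} us")
```

Under these assumed numbers the interconnect contributes four times the media latency in the disk-attached case, which is the sense in which solid state’s low latency is “greatly diminished” there.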
Question: What are the research trends for storage Quality of Service (QoS)? How feasible is it for a storage system to deliver QoS? Besides serving as a high-QoS class, in what other ways can SSDs improve a storage system’s delivery of QoS?
Gibson: QoS for storage is mostly a system-level concept, provided by significant computers called disk arrays (rather than the servers that they really are). Tiering is the latest QoS concept to set the storage world afire, and it has not nearly run its course.
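At its core, tiering reduces to a placement policy: put hot data on the fast (SSD) tier and cold data on disk. A deliberately minimal sketch, with an assumed access-count threshold and hypothetical block names (no real array’s policy is this simple):

```python
# Hypothetical tiering policy sketch. The threshold and block names are
# illustrative assumptions, not any vendor's actual algorithm.

HOT_THRESHOLD = 100  # assumed accesses-per-window needed to earn SSD placement

def choose_tier(access_count: int) -> str:
    """Return the tier a block should live on, given its access frequency."""
    return "ssd" if access_count >= HOT_THRESHOLD else "hdd"

# Example: a small map of block id -> observed access count
access_counts = {"blk0": 5, "blk1": 340, "blk2": 99, "blk3": 100}
placement = {blk: choose_tier(n) for blk, n in access_counts.items()}
print(placement)
```

Real tiering engines layer migration scheduling, hysteresis, and capacity limits on top of a decision like this, which is part of why the concept still has room to run.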
Question: There are currently two ways to integrate next-generation non-volatile memory (e.g. PCRAM, STT-MRAM, RRAM) into a computer system: through the memory bus or through the I/O bus. Memory bus connected NVM presents the system with byte-addressable memory space (like DRAM), while I/O bus connected NVM presents the system with block storage space (like a Fusion-IO card). Accordingly, there are two types of file system design: file systems over memory space (e.g. ramfs) and file systems over block space (e.g. current disk-based file systems). Which approach do you think is more suitable for next-generation NVM?
Gibson: At this point both should be pursued. The potential impact of solid-state NVRAM on the memory bus is much larger, but this is also likely to be a much more invasive change. The integration of solid-state into storage on the other side of the IO bus is much simpler and much less invasive to the way hardware and software are designed and operated.
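The byte-addressable versus block-addressable distinction behind this question can be sketched with ordinary file APIs: the block model forces a read-modify-write of a whole block to change one byte, while a memory mapping (standing in here for memory-bus NVM) lets a single byte be stored in place. A Python sketch using a scratch file as the stand-in device:

```python
import mmap
import os
import tempfile

BLOCK = 4096  # a typical block size

# Scratch file standing in for an NVM device, two blocks long.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"\x00" * BLOCK * 2)

# Block-style access (I/O bus model): to change one byte, read the whole
# 4 KiB block, modify it in memory, and write the whole block back.
with open(path, "r+b") as f:
    block = bytearray(f.read(BLOCK))
    block[10] = 0x41
    f.seek(0)
    f.write(block)

# Byte-addressable access (memory bus model): map the file and store a
# single byte directly, with no block-sized transfer in the program's view.
with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
    m[BLOCK + 10] = 0x42

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data[10], data[BLOCK + 10])  # 65 66
```

This is why the memory-bus path is the more invasive change Gibson describes: software built around block transfers (buffering, write-back caches, block allocators) has to be rethought when stores become byte-granular.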