
How HPC storage is adapting to Enterprise requirements

Curtis Anderson | May 21, 2018

As noted by Addison Snell of Intersect360 Research, the commercial segment has emerged as the primary area of growth in the high-performance-computing (HPC) storage market. HPC use in the Enterprise is increasing, whether from growth in computer-aided engineering to improve quality or from the transformative impact of Artificial Intelligence (AI) and Machine Learning (ML) workloads on how an Enterprise views its trove of customer data. As it does, it's important to understand that HPC storage is not just bigger and faster than traditional Enterprise storage; its architecture is fundamentally different. Monolithic anything won't cut it in this segment; what's needed is an efficient, linear scale-out solution. So let's take a closer look at how HPC storage subsystems need to evolve to address these Enterprise workload requirements.

Your innovation should be a science project, not your storage
Ease of use has long been expected and delivered in the Enterprise market, but the norm in the HPC market has been to favor speed over reliability, availability, and serviceability (RAS). New applications like AI/ML demand enough hard thinking on their own that you don't want to spend extra effort babysitting your storage's RAS or performance, and because these applications need to be always-on, HPC storage subsystems will have to develop the RAS that the Enterprise requires. Panasas has always focused on adaptable, easy-to-use, highly reliable storage that just gets the job done.

Widening the sweet spot of the performance profile
In some use cases, such as genomics and AI/ML, the algorithms and data types change too quickly to make heavy code optimization worthwhile, and storage subsystems will need to adapt. Industry segments such as Computational Fluid Dynamics (CFD) in manufacturing are mature, and their optimal I/O patterns are well understood; in other segments they are not. New workloads involving large numbers of small files are becoming increasingly important to users who want to concentrate on gaining new insights rather than on optimizing their code. Other HPC storage systems will need to widen the sweet spot of their performance profile beyond the very-high-but-narrow peak offered by traditional architectures in order to match the flexibility of the Panasas ActiveStor solution.
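To get a concrete sense of why small-file workloads stress storage differently than large streaming I/O, consider a toy micro-benchmark. This is a hypothetical Python sketch for illustration only, not a Panasas tool or a serious benchmark; established tools such as mdtest and IOR are the standard for this kind of measurement.

```python
import os
import tempfile
import time

def small_file_write_benchmark(directory, num_files=1000, file_size=4096):
    """Write many small files and report aggregate creation rate.

    Each file creation is dominated by metadata operations (create,
    open, close), which is exactly the pattern that narrow, streaming-
    optimized storage architectures handle poorly.
    """
    payload = b"x" * file_size
    start = time.perf_counter()
    for i in range(num_files):
        path = os.path.join(directory, f"sample_{i:06d}.dat")
        with open(path, "wb") as f:
            f.write(payload)
    elapsed = time.perf_counter() - start
    return {
        "files": num_files,
        "seconds": elapsed,
        "files_per_sec": num_files / elapsed,
    }

# Run against a throwaway directory on the local filesystem.
with tempfile.TemporaryDirectory() as d:
    stats = small_file_write_benchmark(d, num_files=200)
    print(f"{stats['files']} files in {stats['seconds']:.3f}s "
          f"({stats['files_per_sec']:.0f} files/sec)")
```

Running the same loop with one large sequential write instead of many small files typically shows a very different bottleneck, which is the gap a wider performance profile has to cover.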

New data management and compliance feature sets
HPC storage subsystems will need to implement several different types of data management and compliance feature sets. Data provenance (where a piece of data came from and who has modified it) is going to become critical in all the core growth segments of HPC storage. Specific compliance regimes will also be required in several vertical segments. In life sciences and precision medicine, for example, HIPAA compliance will be required; in financial analysis, at least for personalized analysis, SOX compliance will be required. In the AI/ML space, the push for data provenance will be driven by the need to show that best practices were followed in managing the training set for a neural network.
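As a rough illustration of what a minimal provenance record might track, here is a hypothetical Python sketch. The class, field names, and helper are invented for this example and are not part of any Panasas product or compliance regime; real HIPAA or SOX audit requirements are far more extensive.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    """Content hash so later readers can verify the data is unchanged."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class ProvenanceRecord:
    """Minimal provenance entry: where a piece of data came from,
    its content fingerprint, and who has modified it since."""
    path: str
    source: str
    sha256: str
    history: list = field(default_factory=list)

    def record_modification(self, user: str, action: str) -> None:
        # Append an audit-trail entry with a UTC timestamp.
        self.history.append({
            "user": user,
            "action": action,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Example: track a training-set shard from ingest through a cleanup step.
rec = ProvenanceRecord(
    path="/datasets/training/images_v1",   # hypothetical dataset path
    source="vendor:acme-labels",           # hypothetical data source
    sha256=fingerprint(b"example training shard"),
)
rec.record_modification("alice", "deduplicated near-identical images")
print(rec.to_json())
```

For AI/ML training sets, an audit trail like this is what lets a team demonstrate after the fact which data went into a model and how it was curated.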

These trends are going to push traditional HPC storage closer to Enterprise storage in terms of features supported, while retaining the performance required from HPC storage. We’re on that path now and Panasas is ready for it.

