Solving the Compute Client Distribution Problem

Geoffrey Noer

VP of Product Management

Scale-out NAS vendors like to brag about what they’ve done inside their storage architecture to distribute data evenly and allow the performance of their systems to scale as additional storage nodes are added. And at Panasas, we’re no exception.

However, while linear performance scaling inside a storage architecture is indeed critically important, that is only one half of the picture. With many scale-out NAS systems, system administrators frequently run into a serious challenge – how to evenly balance their compute clients across the storage system.

The root cause of this problem is that many scale-out NAS systems are designed for NFS v3 and CIFS protocol access, and those protocols were never developed with parallel, scale-out NAS architectures in mind. Instead, these legacy protocols assume a one-to-one mapping between an individual compute client and an individual IP address corresponding to a single storage server node within the scale-out storage system. Since the goal is to have many compute clients accessing many storage servers in parallel, a round-robin technique is typically employed in which each client mounts a different storage server node.
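To make that one-to-one mapping concrete, here is a minimal sketch of the kind of static round-robin assignment an administrator or provisioning script typically ends up doing. The hostnames, IP addresses, and export path below are made up purely for illustration; they are not taken from any particular deployment.

```python
# Hypothetical example: statically assigning compute clients to storage node
# IPs in round-robin fashion, as is common with NFS v3 scale-out deployments.
# All names and addresses below are illustrative.

storage_nodes = [f"10.0.0.{i}" for i in range(1, 11)]   # 10 storage server nodes
clients = [f"compute{n:03d}" for n in range(1, 51)]      # 50 compute clients

# Each client mounts exactly one storage node (the one-to-one mapping).
mount_map = {client: storage_nodes[i % len(storage_nodes)]
             for i, client in enumerate(clients)}

for client, node in sorted(mount_map.items()):
    print(f"{client}: mount -t nfs {node}:/export /mnt/scratch")
```

The assignment is fixed at mount time, which is exactly why any mismatch between clients and their actual I/O demand turns into a persistent imbalance.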

In an ideal world, this can work out fine – for example, a scale-out NAS system with ten storage server nodes accessed by fifty compute clients works out to five clients per storage server node. As long as the round-robin distribution allocates the IP addresses correctly and every client puts an identical load on the system, you would most likely see the performance the storage system promises. But how often does that really happen?

In practice, not often. In most situations, clients don't end up evenly allocated across the scale-out NAS storage server nodes. For example, with ten storage server nodes and 25 compute clients, five nodes end up serving three clients each while the other five serve only two, so half of the nodes carry a 50% higher load than the rest. If the compute clients are all working on a shared, I/O-bound technical computing workload, that imbalance would likely make the job take 50% longer to run. In another scenario, the clients may be evenly distributed but their load isn't. Again, performance drops significantly because some storage server nodes sit underutilized while the busiest clients compete for storage resources more than they should have to.
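A quick back-of-the-envelope simulation shows both failure modes. The node counts, client counts, and per-client load figures are hypothetical and chosen only to illustrate the arithmetic.

```python
# Illustrative only: simulate round-robin placement of clients onto storage
# nodes and compare per-node load when the client count (or the per-client
# demand) doesn't divide evenly. All numbers are hypothetical.
from collections import Counter
import random

def node_loads(num_nodes, num_clients, per_client_load=None):
    """Total load per storage node under round-robin client placement."""
    loads = Counter()
    for i in range(num_clients):
        node = i % num_nodes
        loads[node] += per_client_load[i] if per_client_load else 1
    return loads

# Case 1: 25 identical clients on 10 nodes -> five nodes carry 3 clients,
# five carry 2, i.e. 50% more load on the busier half.
print(node_loads(10, 25))

# Case 2: clients are spread evenly (50 on 10 nodes), but their I/O demand
# isn't -- some clients are ten times busier than others.
random.seed(0)
uneven = [random.choice([1, 10]) for _ in range(50)]
print(node_loads(10, 50, uneven))
```

In both cases the busiest node sets the pace for an I/O-bound job, so the whole workload slows down to match the most overloaded storage server.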

So what’s the best cure? Not a paid-for feature that tries to rebalance NFS clients on the fly as imbalances are detected, that’s for sure. Instead, the best solution is simply to never encounter the imbalances in the first place. To do that, deploy a scale-out NAS system designed from the start to support a parallel protocol. Coupling a scale-out NAS architecture with a parallel protocol like Panasas DirectFlow or the emerging pNFS standard delivers balanced clients and an evenly distributed load from the beginning, because those protocols ensure that every compute client reads and writes data across many (and sometimes even all) storage server nodes.
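For intuition, here is a deliberately simplified sketch of striped data placement. It is not the actual DirectFlow or pNFS layout logic – just an illustration, with made-up stripe and node parameters, of why per-client striping makes client placement irrelevant to load balance.

```python
# Conceptual sketch only: NOT the DirectFlow or pNFS wire protocol.
# It illustrates why a parallel protocol balances load by design -- every
# client stripes each file's I/O across many (or all) storage nodes.

NUM_NODES = 10
STRIPE_SIZE = 1 << 20  # 1 MiB stripe units (hypothetical)

def nodes_touched(file_size, num_nodes=NUM_NODES, stripe_size=STRIPE_SIZE):
    """Which storage nodes a single client touches when writing one file."""
    stripes = -(-file_size // stripe_size)  # ceiling division
    return {stripe % num_nodes for stripe in range(stripes)}

# A single client writing a 100 MiB file spreads its I/O across all 10 nodes,
# so per-node load tracks aggregate demand rather than client placement.
print(sorted(nodes_touched(100 * (1 << 20))))
```

Because every client talks to every node, adding clients or shifting the workload simply raises or lowers the load on all storage nodes together rather than piling it onto whichever node a client happened to mount.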

So in summary, this is a good example of where going halfway isn’t far enough. A scale-out NAS system without a parallel protocol is insufficient, just as taking a legacy scale-up architecture and slapping a parallel protocol on top won’t cut it either. Panasas, on the other hand, designed its ActiveStor solutions for the true parallelism needed by the highest-performance HPC workloads and then coupled that with the enterprise features needed to succeed in commercial technical computing. For more information, check out the ActiveStor 14 product page, and thanks as always for reading our blog!